feat: add helm deployment

Morten Olsen committed 2025-10-18 00:01:58 +02:00
parent 1f7837cabc
commit 47a8dd96c2
26 changed files with 2080 additions and 73 deletions


@@ -0,0 +1,21 @@
---
agent: build
description: Implement an approved OpenSpec change and keep tasks in sync.
---
<!-- OPENSPEC:START -->
**Guardrails**
- Favor straightforward, minimal implementations first and add complexity only when it is requested or clearly required.
- Keep changes tightly scoped to the requested outcome.
- Refer to `openspec/AGENTS.md` (located inside the `openspec/` directory—run `ls openspec` or `openspec update` if you don't see it) if you need additional OpenSpec conventions or clarifications.
**Steps**
Track these steps as TODOs and complete them one by one.
1. Read `changes/<id>/proposal.md`, `design.md` (if present), and `tasks.md` to confirm scope and acceptance criteria.
2. Work through tasks sequentially, keeping edits minimal and focused on the requested change.
3. Confirm completion before updating statuses—make sure every item in `tasks.md` is finished.
4. Update the checklist after all work is done so each task is marked `- [x]` and reflects reality.
5. Reference `openspec list` or `openspec show <item>` when additional context is required.
**Reference**
- Use `openspec show <id> --json --deltas-only` if you need additional context from the proposal while implementing.
<!-- OPENSPEC:END -->


@@ -0,0 +1,19 @@
---
agent: build
description: Archive a deployed OpenSpec change and update specs.
---
<!-- OPENSPEC:START -->
**Guardrails**
- Favor straightforward, minimal implementations first and add complexity only when it is requested or clearly required.
- Keep changes tightly scoped to the requested outcome.
- Refer to `openspec/AGENTS.md` (located inside the `openspec/` directory—run `ls openspec` or `openspec update` if you don't see it) if you need additional OpenSpec conventions or clarifications.
**Steps**
1. Identify the requested change ID (via the prompt or `openspec list`).
2. Run `openspec archive <id> --yes` to let the CLI move the change and apply spec updates without prompts (use `--skip-specs` only for tooling-only work).
3. Review the command output to confirm the target specs were updated and the change landed in `changes/archive/`.
4. Validate with `openspec validate --strict` and inspect with `openspec show <id>` if anything looks off.
**Reference**
- Inspect refreshed specs with `openspec list --specs` and address any validation issues before handing off.
<!-- OPENSPEC:END -->


@@ -0,0 +1,29 @@
---
agent: build
description: Scaffold a new OpenSpec change and validate strictly.
---
The user has requested the following change. Use the OpenSpec instructions to create their change proposal.
<UserRequest>
$ARGUMENTS
</UserRequest>
<!-- OPENSPEC:START -->
**Guardrails**
- Favor straightforward, minimal implementations first and add complexity only when it is requested or clearly required.
- Keep changes tightly scoped to the requested outcome.
- Refer to `openspec/AGENTS.md` (located inside the `openspec/` directory—run `ls openspec` or `openspec update` if you don't see it) if you need additional OpenSpec conventions or clarifications.
- Identify any vague or ambiguous details and ask the necessary follow-up questions before editing files.
**Steps**
1. Review `openspec/project.md`, run `openspec list` and `openspec list --specs`, and inspect related code or docs (e.g., via `rg`/`ls`) to ground the proposal in current behaviour; note any gaps that require clarification.
2. Choose a unique verb-led `change-id` and scaffold `proposal.md`, `tasks.md`, and `design.md` (when needed) under `openspec/changes/<id>/`.
3. Map the change into concrete capabilities or requirements, breaking multi-scope efforts into distinct spec deltas with clear relationships and sequencing.
4. Capture architectural reasoning in `design.md` when the solution spans multiple systems, introduces new patterns, or demands trade-off discussion before committing to specs.
5. Draft spec deltas in `changes/<id>/specs/<capability>/spec.md` (one folder per capability) using `## ADDED|MODIFIED|REMOVED Requirements` with at least one `#### Scenario:` per requirement and cross-reference related capabilities when relevant.
6. Draft `tasks.md` as an ordered list of small, verifiable work items that deliver user-visible progress, include validation (tests, tooling), and highlight dependencies or parallelizable work.
7. Validate with `openspec validate <id> --strict` and resolve every issue before sharing the proposal.
**Reference**
- Use `openspec show <id> --json --deltas-only` or `openspec show <spec> --type spec` to inspect details when validation fails.
- Search existing requirements with `rg -n "Requirement:|Scenario:" openspec/specs` before writing new ones.
- Explore the codebase with `rg <keyword>`, `ls`, or direct file reads so proposals align with current implementation realities.
<!-- OPENSPEC:END -->

AGENTS.md

@@ -0,0 +1,45 @@
<!-- OPENSPEC:START -->
# OpenSpec Instructions
These instructions are for AI assistants working in this project.
Always open `@/openspec/AGENTS.md` when the request:
- Mentions planning or proposals (words like proposal, spec, change, plan)
- Introduces new capabilities, breaking changes, architecture shifts, or big performance/security work
- Sounds ambiguous and you need the authoritative spec before coding
Use `@/openspec/AGENTS.md` to learn:
- How to create and apply change proposals
- Spec format and conventions
- Project structure and guidelines
Keep this managed block so 'openspec update' can refresh the instructions.
<!-- OPENSPEC:END -->
# Backbone - Agent Development Guide
## Commands
- Build: `pnpm build` (TypeScript compilation - run before committing)
- Lint: `pnpm test:lint` (ESLint - run before committing)
- Test all: `pnpm test:unit` (runs all Vitest tests)
- Test single: `pnpm test:unit tests/mqtt.test.ts` (run specific test file)
- Dev: `pnpm dev` (watch mode with auto-reload)
## Code Style (enforced by ESLint/Prettier)
- **NO default exports** - use named exports only (`export { ClassName }`)
- **Type definitions**: use `type`, NOT `interface` (`type Foo = { ... }`)
- **File extensions**: always include `.ts` in imports (`from './file.ts'`)
- **Import paths**: use `#root/*` alias for src/ (`#root/utils/services.ts`)
- **Import order**: builtin → external → internal → parent → sibling → index (newlines between groups)
- **Private fields**: use `#` prefix for private class members (`#services: Services`)
- **Formatting**: 120 char width, single quotes, 2 spaces, semicolons, trailing commas
- **Exports**: exports must be last in file (`import/exports-last` rule)
## Patterns
- **Dependency injection**: use `Services` container - constructor takes `services: Services`, access via `this.#services.get(ClassName)`
- **Validation**: use Zod schemas for all data validation
- **Types**: leverage TypeScript strict mode - no implicit any, null checks required

chart/Chart.yaml

@@ -0,0 +1,6 @@
apiVersion: v2
name: backbone
description: A Helm chart for deploying the backbone server
type: application
version: 0.2.0
appVersion: '1.0.0'


@@ -0,0 +1,60 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "backbone.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a default fully qualified app name.
*/}}
{{- define "backbone.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "backbone.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "backbone.labels" -}}
helm.sh/chart: {{ include "backbone.chart" . }}
{{ include "backbone.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "backbone.selectorLabels" -}}
app.kubernetes.io/name: {{ include "backbone.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
{{/*
Create the name of the service account to use
*/}}
{{- define "backbone.serviceAccountName" -}}
{{- if .Values.serviceAccount.create }}
{{- default (include "backbone.fullname" .) .Values.serviceAccount.name }}
{{- else }}
{{- default "default" .Values.serviceAccount.name }}
{{- end }}
{{- end }}


@@ -0,0 +1,16 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: {{ include "backbone.fullname" . }}
  labels:
    {{- include "backbone.labels" . | nindent 4 }}
rules:
  - apiGroups: ['']
    resources: ['secrets']
    verbs: ['create', 'get', 'watch', 'list']
  - apiGroups: ['backbone.mortenolsen.pro']
    resources: ['*']
    verbs: ['get', 'watch', 'list', 'patch', 'create', 'update', 'replace']
  - apiGroups: ['apiextensions.k8s.io']
    resources: ['customresourcedefinitions']
    verbs: ['get', 'create', 'update', 'replace', 'patch']


@@ -0,0 +1,14 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: {{ include "backbone.fullname" . }}
  labels:
    {{- include "backbone.labels" . | nindent 4 }}
subjects:
  - kind: ServiceAccount
    name: {{ include "backbone.serviceAccountName" . }}
    namespace: {{ .Release.Namespace }}
roleRef:
  kind: ClusterRole
  name: {{ include "backbone.fullname" . }}
  apiGroup: rbac.authorization.k8s.io


@@ -0,0 +1,133 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "backbone.fullname" . }}
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "backbone.labels" . | nindent 4 }}
spec:
  replicas: 1
  selector:
    matchLabels:
      {{- include "backbone.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      annotations:
        {{- with .Values.podAnnotations }}
        {{- toYaml . | nindent 8 }}
        {{- end }}
      labels:
        {{- include "backbone.selectorLabels" . | nindent 8 }}
    spec:
      serviceAccountName: {{ include "backbone.serviceAccountName" . }}
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      containers:
        - name: {{ .Chart.Name }}
          image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          ports:
            - name: http
              containerPort: {{ .Values.config.httpPort }}
              protocol: TCP
            - name: tcp
              containerPort: {{ .Values.config.tcpPort }}
              protocol: TCP
          env:
            - name: TZ
              value: {{ .Values.timezone | quote }}
            {{- if .Values.config.adminToken }}
            - name: ADMIN_TOKEN
              value: {{ .Values.config.adminToken | quote }}
            {{- end }}
            {{- if .Values.config.jwtSecret }}
            - name: JWT_SECRET
              value: {{ .Values.config.jwtSecret | quote }}
            {{- end }}
            - name: HTTP_PORT
              value: {{ .Values.config.httpPort | quote }}
            - name: TCP_PORT
              value: {{ .Values.config.tcpPort | quote }}
            - name: K8S_ENABLED
              value: {{ .Values.k8s.enabled | quote }}
            - name: WS_ENABLED
              value: {{ .Values.ws.enabled | quote }}
            - name: API_ENABLED
              value: {{ .Values.api.enabled | quote }}
            - name: TCP_ENABLED
              value: {{ .Values.tcp.enabled | quote }}
            {{- if .Values.oidc.enabled }}
            - name: OIDC_ENABLED
              value: "true"
            - name: OIDC_DISCOVERY
              value: {{ .Values.oidc.discovery | quote }}
            - name: OIDC_CLIENT_ID
              value: {{ .Values.oidc.clientId | quote }}
            - name: OIDC_CLIENT_SECRET
              value: {{ .Values.oidc.clientSecret | quote }}
            - name: OIDC_GROUP_FIELD
              value: {{ .Values.oidc.groupField | quote }}
            {{- if .Values.oidc.adminGroup }}
            - name: OIDC_ADMIN_GROUP
              value: {{ .Values.oidc.adminGroup | quote }}
            {{- end }}
            {{- if .Values.oidc.writerGroup }}
            - name: OIDC_WRITER_GROUP
              value: {{ .Values.oidc.writerGroup | quote }}
            {{- end }}
            {{- if .Values.oidc.readerGroup }}
            - name: OIDC_READER_GROUP
              value: {{ .Values.oidc.readerGroup | quote }}
            {{- end }}
            {{- end }}
            {{- if .Values.redis.enabled }}
            - name: REDIS_ENABLED
              value: "true"
            - name: REDIS_HOST
              value: {{ .Values.redis.host | quote }}
            - name: REDIS_PORT
              value: {{ .Values.redis.port | quote }}
            {{- if .Values.redis.password }}
            - name: REDIS_PASSWORD
              value: {{ .Values.redis.password | quote }}
            {{- end }}
            - name: REDIS_DB
              value: {{ .Values.redis.db | quote }}
            {{- end }}
          {{- if .Values.persistence.enabled }}
          volumeMounts:
            - name: data
              mountPath: /data
          {{- end }}
          {{- if .Values.probes.liveness.enabled }}
          livenessProbe:
            {{- omit .Values.probes.liveness "enabled" | toYaml | nindent 12 }}
          {{- end }}
          {{- if .Values.probes.readiness.enabled }}
          readinessProbe:
            {{- omit .Values.probes.readiness "enabled" | toYaml | nindent 12 }}
          {{- end }}
          {{- with .Values.resources }}
          resources:
            {{- toYaml . | nindent 12 }}
          {{- end }}
      {{- if .Values.persistence.enabled }}
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: {{ include "backbone.fullname" . }}
      {{- end }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}


@@ -0,0 +1,14 @@
{{- if .Values.httpService.enabled -}}
apiVersion: homelab.mortenolsen.pro/v1
kind: HttpService
metadata:
  name: '{{ include "backbone.fullname" . }}'
  namespace: '{{ .Release.Namespace }}'
spec:
  environment: '{{ .Values.httpService.environment }}'
  subdomain: '{{ .Values.httpService.subdomain }}'
  destination:
    host: '{{ include "backbone.fullname" . }}'
    port:
      number: 80
{{- end }}


@@ -0,0 +1,42 @@
{{- if .Values.ingress.enabled -}}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ include "backbone.fullname" . }}
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "backbone.labels" . | nindent 4 }}
  {{- with .Values.ingress.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
spec:
  {{- if .Values.ingress.className }}
  ingressClassName: {{ .Values.ingress.className }}
  {{- end }}
  {{- if .Values.ingress.tls }}
  tls:
    {{- range .Values.ingress.tls }}
    - hosts:
        {{- range .hosts }}
        - {{ . | quote }}
        {{- end }}
      secretName: {{ .secretName }}
    {{- end }}
  {{- end }}
  rules:
    {{- range .Values.ingress.hosts }}
    - host: {{ .host | quote }}
      http:
        paths:
          {{- range .paths }}
          - path: {{ .path }}
            pathType: {{ .pathType }}
            backend:
              service:
                name: {{ include "backbone.fullname" $ }}
                port:
                  number: {{ $.Values.service.http.port }}
          {{- end }}
    {{- end }}
{{- end }}


@@ -0,0 +1,22 @@
{{- if .Values.persistence.enabled -}}
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ include "backbone.fullname" . }}
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "backbone.labels" . | nindent 4 }}
  {{- with .Values.persistence.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
spec:
  accessModes:
    - {{ .Values.persistence.accessMode }}
  {{- if .Values.persistence.storageClass }}
  storageClassName: {{ .Values.persistence.storageClass | quote }}
  {{- end }}
  resources:
    requests:
      storage: {{ .Values.persistence.size }}
{{- end }}


@@ -0,0 +1,13 @@
{{- if .Values.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ include "backbone.serviceAccountName" . }}
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "backbone.labels" . | nindent 4 }}
  {{- with .Values.serviceAccount.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
{{- end }}


@@ -0,0 +1,42 @@
apiVersion: v1
kind: Service
metadata:
  name: {{ include "backbone.fullname" . }}
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "backbone.labels" . | nindent 4 }}
  {{- with .Values.service.http.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
spec:
  type: {{ .Values.service.http.type }}
  ports:
    - port: {{ .Values.service.http.port }}
      targetPort: {{ .Values.service.http.targetPort }}
      protocol: TCP
      name: http
  selector:
    {{- include "backbone.selectorLabels" . | nindent 4 }}
---
apiVersion: v1
kind: Service
metadata:
  name: {{ include "backbone.fullname" . }}-tcp
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "backbone.labels" . | nindent 4 }}
  {{- with .Values.service.tcp.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
spec:
  type: {{ .Values.service.tcp.type }}
  ports:
    - port: {{ .Values.service.tcp.port }}
      targetPort: {{ .Values.service.tcp.targetPort }}
      protocol: TCP
      name: tcp
  selector:
    {{- include "backbone.selectorLabels" . | nindent 4 }}

chart/values.yaml

@@ -0,0 +1,125 @@
image:
  repository: ghcr.io/morten-olsen/backbone
  tag: main
  pullPolicy: IfNotPresent
nameOverride: ''
fullnameOverride: ''
serviceAccount:
  create: true
  annotations: {}
  name: ''
httpService:
  enabled: false
  subdomain: backbone
  environment: ''
config:
  adminToken: ''
  jwtSecret: ''
  httpPort: 8883
  tcpPort: 1883
k8s:
  enabled: true
ws:
  enabled: true
api:
  enabled: true
tcp:
  enabled: true
oidc:
  enabled: false
  discovery: ''
  clientId: ''
  clientSecret: ''
  groupField: 'groups'
  adminGroup: ''
  writerGroup: ''
  readerGroup: ''
redis:
  enabled: false
  host: 'localhost'
  port: 6379
  password: ''
  db: 0
persistence:
  enabled: false
  storageClass: ''
  accessMode: ReadWriteOnce
  size: 1Gi
  annotations: {}
service:
  http:
    type: ClusterIP
    port: 80
    targetPort: 8883
    annotations: {}
  tcp:
    type: ClusterIP
    port: 1883
    targetPort: 1883
    annotations: {}
ingress:
  enabled: false
  className: ''
  annotations: {}
  hosts:
    - host: chart-example.local
      paths:
        - path: /
          pathType: Prefix
  tls: []
resources: {}
probes:
  liveness:
    enabled: true
    httpGet:
      path: /health
      port: http
    initialDelaySeconds: 30
    periodSeconds: 10
    timeoutSeconds: 5
    failureThreshold: 3
  readiness:
    enabled: true
    httpGet:
      path: /health
      port: http
    initialDelaySeconds: 10
    periodSeconds: 5
    timeoutSeconds: 3
    failureThreshold: 3
nodeSelector: {}
tolerations: []
affinity: {}
podAnnotations: {}
podSecurityContext:
  runAsNonRoot: true
  runAsUser: 1000
  fsGroup: 1000
securityContext:
  allowPrivilegeEscalation: false
  capabilities:
    drop:
      - ALL
timezone: Europe/Amsterdam

openspec/AGENTS.md

@@ -0,0 +1,456 @@
# OpenSpec Instructions
Instructions for AI coding assistants using OpenSpec for spec-driven development.
## TL;DR Quick Checklist
- Search existing work: `openspec spec list --long`, `openspec list` (use `rg` only for full-text search)
- Decide scope: new capability vs modify existing capability
- Pick a unique `change-id`: kebab-case, verb-led (`add-`, `update-`, `remove-`, `refactor-`)
- Scaffold: `proposal.md`, `tasks.md`, `design.md` (only if needed), and delta specs per affected capability
- Write deltas: use `## ADDED|MODIFIED|REMOVED|RENAMED Requirements`; include at least one `#### Scenario:` per requirement
- Validate: `openspec validate [change-id] --strict` and fix issues
- Request approval: Do not start implementation until proposal is approved
## Three-Stage Workflow
### Stage 1: Creating Changes
Create proposal when you need to:
- Add features or functionality
- Make breaking changes (API, schema)
- Change architecture or patterns
- Optimize performance (changes behavior)
- Update security patterns
Triggers (examples):
- "Help me create a change proposal"
- "Help me plan a change"
- "Help me create a proposal"
- "I want to create a spec proposal"
- "I want to create a spec"
Loose matching guidance:
- Contains one of: `proposal`, `change`, `spec`
- With one of: `create`, `plan`, `make`, `start`, `help`
Skip proposal for:
- Bug fixes (restore intended behavior)
- Typos, formatting, comments
- Dependency updates (non-breaking)
- Configuration changes
- Tests for existing behavior
**Workflow**
1. Review `openspec/project.md`, `openspec list`, and `openspec list --specs` to understand current context.
2. Choose a unique verb-led `change-id` and scaffold `proposal.md`, `tasks.md`, optional `design.md`, and spec deltas under `openspec/changes/<id>/`.
3. Draft spec deltas using `## ADDED|MODIFIED|REMOVED Requirements` with at least one `#### Scenario:` per requirement.
4. Run `openspec validate <id> --strict` and resolve any issues before sharing the proposal.
### Stage 2: Implementing Changes
Track these steps as TODOs and complete them one by one.
1. **Read proposal.md** - Understand what's being built
2. **Read design.md** (if exists) - Review technical decisions
3. **Read tasks.md** - Get implementation checklist
4. **Implement tasks sequentially** - Complete in order
5. **Confirm completion** - Ensure every item in `tasks.md` is finished before updating statuses
6. **Update checklist** - After all work is done, set every task to `- [x]` so the list reflects reality
7. **Approval gate** - Do not start implementation until the proposal is reviewed and approved
### Stage 3: Archiving Changes
After deployment, create separate PR to:
- Move `changes/[name]/` → `changes/archive/YYYY-MM-DD-[name]/`
- Update `specs/` if capabilities changed
- Use `openspec archive [change] --skip-specs --yes` for tooling-only changes
- Run `openspec validate --strict` to confirm the archived change passes checks
## Before Any Task
**Context Checklist:**
- [ ] Read relevant specs in `specs/[capability]/spec.md`
- [ ] Check pending changes in `changes/` for conflicts
- [ ] Read `openspec/project.md` for conventions
- [ ] Run `openspec list` to see active changes
- [ ] Run `openspec list --specs` to see existing capabilities
**Before Creating Specs:**
- Always check if capability already exists
- Prefer modifying existing specs over creating duplicates
- Use `openspec show [spec]` to review current state
- If request is ambiguous, ask 1–2 clarifying questions before scaffolding
### Search Guidance
- Enumerate specs: `openspec spec list --long` (or `--json` for scripts)
- Enumerate changes: `openspec list` (or `openspec change list --json` - deprecated but available)
- Show details:
- Spec: `openspec show <spec-id> --type spec` (use `--json` for filters)
- Change: `openspec show <change-id> --json --deltas-only`
- Full-text search (use ripgrep): `rg -n "Requirement:|Scenario:" openspec/specs`
## Quick Start
### CLI Commands
```bash
# Essential commands
openspec list # List active changes
openspec list --specs # List specifications
openspec show [item] # Display change or spec
openspec diff [change] # Show spec differences
openspec validate [item] # Validate changes or specs
openspec archive [change] [--yes|-y] # Archive after deployment (add --yes for non-interactive runs)
# Project management
openspec init [path] # Initialize OpenSpec
openspec update [path] # Update instruction files
# Interactive mode
openspec show # Prompts for selection
openspec validate # Bulk validation mode
# Debugging
openspec show [change] --json --deltas-only
openspec validate [change] --strict
```
### Command Flags
- `--json` - Machine-readable output
- `--type change|spec` - Disambiguate items
- `--strict` - Comprehensive validation
- `--no-interactive` - Disable prompts
- `--skip-specs` - Archive without spec updates
- `--yes`/`-y` - Skip confirmation prompts (non-interactive archive)
## Directory Structure
```
openspec/
├── project.md              # Project conventions
├── specs/                  # Current truth - what IS built
│   └── [capability]/       # Single focused capability
│       ├── spec.md         # Requirements and scenarios
│       └── design.md       # Technical patterns
├── changes/                # Proposals - what SHOULD change
│   ├── [change-name]/
│   │   ├── proposal.md     # Why, what, impact
│   │   ├── tasks.md        # Implementation checklist
│   │   ├── design.md       # Technical decisions (optional; see criteria)
│   │   └── specs/          # Delta changes
│   │       └── [capability]/
│   │           └── spec.md # ADDED/MODIFIED/REMOVED
│   └── archive/            # Completed changes
```
## Creating Change Proposals
### Decision Tree
```
New request?
├─ Bug fix restoring spec behavior? → Fix directly
├─ Typo/format/comment? → Fix directly
├─ New feature/capability? → Create proposal
├─ Breaking change? → Create proposal
├─ Architecture change? → Create proposal
└─ Unclear? → Create proposal (safer)
```
### Proposal Structure
1. **Create directory:** `changes/[change-id]/` (kebab-case, verb-led, unique)
2. **Write proposal.md:**
```markdown
## Why
[1-2 sentences on problem/opportunity]
## What Changes
- [Bullet list of changes]
- [Mark breaking changes with **BREAKING**]
## Impact
- Affected specs: [list capabilities]
- Affected code: [key files/systems]
```
3. **Create spec deltas:** `specs/[capability]/spec.md`
```markdown
## ADDED Requirements
### Requirement: New Feature
The system SHALL provide...
#### Scenario: Success case
- **WHEN** user performs action
- **THEN** expected result
## MODIFIED Requirements
### Requirement: Existing Feature
[Complete modified requirement]
## REMOVED Requirements
### Requirement: Old Feature
**Reason**: [Why removing]
**Migration**: [How to handle]
```
If multiple capabilities are affected, create multiple delta files under `changes/[change-id]/specs/<capability>/spec.md`—one per capability.
4. **Create tasks.md:**
```markdown
## 1. Implementation
- [ ] 1.1 Create database schema
- [ ] 1.2 Implement API endpoint
- [ ] 1.3 Add frontend component
- [ ] 1.4 Write tests
```
5. **Create design.md when needed:**
Create `design.md` if any of the following apply; otherwise omit it:
- Cross-cutting change (multiple services/modules) or a new architectural pattern
- New external dependency or significant data model changes
- Security, performance, or migration complexity
- Ambiguity that benefits from technical decisions before coding
Minimal `design.md` skeleton:
```markdown
## Context
[Background, constraints, stakeholders]
## Goals / Non-Goals
- Goals: [...]
- Non-Goals: [...]
## Decisions
- Decision: [What and why]
- Alternatives considered: [Options + rationale]
## Risks / Trade-offs
- [Risk] → Mitigation
## Migration Plan
[Steps, rollback]
## Open Questions
- [...]
```
## Spec File Format
### Critical: Scenario Formatting
**CORRECT** (use #### headers):
```markdown
#### Scenario: User login success
- **WHEN** valid credentials provided
- **THEN** return JWT token
```
**WRONG** (don't use bullets or bold):
```markdown
- **Scenario: User login** ❌
**Scenario**: User login ❌
### Scenario: User login ❌
```
Every requirement MUST have at least one scenario.
### Requirement Wording
- Use SHALL/MUST for normative requirements (avoid should/may unless intentionally non-normative)
### Delta Operations
- `## ADDED Requirements` - New capabilities
- `## MODIFIED Requirements` - Changed behavior
- `## REMOVED Requirements` - Deprecated features
- `## RENAMED Requirements` - Name changes
Headers matched with `trim(header)` - whitespace ignored.
#### When to use ADDED vs MODIFIED
- ADDED: Introduces a new capability or sub-capability that can stand alone as a requirement. Prefer ADDED when the change is orthogonal (e.g., adding "Slash Command Configuration") rather than altering the semantics of an existing requirement.
- MODIFIED: Changes the behavior, scope, or acceptance criteria of an existing requirement. Always paste the full, updated requirement content (header + all scenarios). The archiver will replace the entire requirement with what you provide here; partial deltas will drop previous details.
- RENAMED: Use when only the name changes. If you also change behavior, use RENAMED (name) plus MODIFIED (content) referencing the new name.
Common pitfall: Using MODIFIED to add a new concern without including the previous text. This causes loss of detail at archive time. If you aren't explicitly changing the existing requirement, add a new requirement under ADDED instead.
Authoring a MODIFIED requirement correctly:
1) Locate the existing requirement in `openspec/specs/<capability>/spec.md`.
2) Copy the entire requirement block (from `### Requirement: ...` through its scenarios).
3) Paste it under `## MODIFIED Requirements` and edit to reflect the new behavior.
4) Ensure the header text matches exactly (whitespace-insensitive) and keep at least one `#### Scenario:`.
Example for RENAMED:
```markdown
## RENAMED Requirements
- FROM: `### Requirement: Login`
- TO: `### Requirement: User Authentication`
```
## Troubleshooting
### Common Errors
**"Change must have at least one delta"**
- Check `changes/[name]/specs/` exists with .md files
- Verify files have operation prefixes (## ADDED Requirements)
**"Requirement must have at least one scenario"**
- Check scenarios use `#### Scenario:` format (4 hashtags)
- Don't use bullet points or bold for scenario headers
**Silent scenario parsing failures**
- Exact format required: `#### Scenario: Name`
- Debug with: `openspec show [change] --json --deltas-only`
### Validation Tips
```bash
# Always use strict mode for comprehensive checks
openspec validate [change] --strict
# Debug delta parsing
openspec show [change] --json | jq '.deltas'
# Check specific requirement
openspec show [spec] --json -r 1
```
## Happy Path Script
```bash
# 1) Explore current state
openspec spec list --long
openspec list
# Optional full-text search:
# rg -n "Requirement:|Scenario:" openspec/specs
# rg -n "^#|Requirement:" openspec/changes
# 2) Choose change id and scaffold
CHANGE=add-two-factor-auth
mkdir -p openspec/changes/$CHANGE/specs/auth
printf "## Why\n...\n\n## What Changes\n- ...\n\n## Impact\n- ...\n" > openspec/changes/$CHANGE/proposal.md
printf "## 1. Implementation\n- [ ] 1.1 ...\n" > openspec/changes/$CHANGE/tasks.md
# 3) Add deltas (example)
cat > openspec/changes/$CHANGE/specs/auth/spec.md << 'EOF'
## ADDED Requirements
### Requirement: Two-Factor Authentication
Users MUST provide a second factor during login.
#### Scenario: OTP required
- **WHEN** valid credentials are provided
- **THEN** an OTP challenge is required
EOF
# 4) Validate
openspec validate $CHANGE --strict
```
## Multi-Capability Example
```
openspec/changes/add-2fa-notify/
├── proposal.md
├── tasks.md
└── specs/
    ├── auth/
    │   └── spec.md           # ADDED: Two-Factor Authentication
    └── notifications/
        └── spec.md           # ADDED: OTP email notification
```
auth/spec.md
```markdown
## ADDED Requirements
### Requirement: Two-Factor Authentication
...
```
notifications/spec.md
```markdown
## ADDED Requirements
### Requirement: OTP Email Notification
...
```
## Best Practices
### Simplicity First
- Default to <100 lines of new code
- Single-file implementations until proven insufficient
- Avoid frameworks without clear justification
- Choose boring, proven patterns
### Complexity Triggers
Only add complexity with:
- Performance data showing current solution too slow
- Concrete scale requirements (>1000 users, >100MB data)
- Multiple proven use cases requiring abstraction
### Clear References
- Use `file.ts:42` format for code locations
- Reference specs as `specs/auth/spec.md`
- Link related changes and PRs
### Capability Naming
- Use verb-noun: `user-auth`, `payment-capture`
- Single purpose per capability
- 10-minute understandability rule
- Split if description needs "AND"
### Change ID Naming
- Use kebab-case, short and descriptive: `add-two-factor-auth`
- Prefer verb-led prefixes: `add-`, `update-`, `remove-`, `refactor-`
- Ensure uniqueness; if taken, append `-2`, `-3`, etc.
## Tool Selection Guide
| Task | Tool | Why |
|------|------|-----|
| Find files by pattern | Glob | Fast pattern matching |
| Search code content | Grep | Optimized regex search |
| Read specific files | Read | Direct file access |
| Explore unknown scope | Task | Multi-step investigation |
## Error Recovery
### Change Conflicts
1. Run `openspec list` to see active changes
2. Check for overlapping specs
3. Coordinate with change owners
4. Consider combining proposals
### Validation Failures
1. Run with `--strict` flag
2. Check JSON output for details
3. Verify spec file format
4. Ensure scenarios properly formatted
### Missing Context
1. Read project.md first
2. Check related specs
3. Review recent archives
4. Ask for clarification
## Quick Reference
### Stage Indicators
- `changes/` - Proposed, not yet built
- `specs/` - Built and deployed
- `archive/` - Completed changes
### File Purposes
- `proposal.md` - Why and what
- `tasks.md` - Implementation steps
- `design.md` - Technical decisions
- `spec.md` - Requirements and behavior
### CLI Essentials
```bash
openspec list # What's in progress?
openspec show [item] # View details
openspec diff [change] # What's changing?
openspec validate --strict # Is it correct?
openspec archive [change] [--yes|-y] # Mark complete (add --yes for automation)
```
Remember: Specs are truth. Changes are proposals. Keep them in sync.


@@ -0,0 +1,194 @@
## Context
The Backbone Helm chart currently has minimal configuration support. Users cannot configure the broker through Helm values, making it difficult to deploy in production environments. The chart needs to support all configuration options from the README and follow Helm best practices for production deployments.
### Constraints
- Must maintain backward compatibility where possible
- Must align with environment variables documented in README
- Must follow Kubernetes and Helm best practices
- Storage backend (SQLite, PostgreSQL) may require persistent volumes
### Stakeholders
- Kubernetes operators deploying Backbone
- Users requiring production-grade deployments with HA, monitoring, and persistence
## Goals / Non-Goals
### Goals
- Expose all README environment variables through Helm values
- Support persistent storage for `/data` directory with configurable storage class
- Follow Helm best practices (resources, probes, security contexts, labels)
- Enable production-ready deployments with proper health checks
- Support ingress for HTTP API exposure
### Non-Goals
- StatefulSet conversion (a Deployment is sufficient for a single-replica MQTT broker)
- Horizontal Pod Autoscaling (MQTT broker state management complexity)
- Built-in monitoring/metrics exporters (separate concern)
- Multi-replica support with Redis clustering (future enhancement)
## Decisions
### Decision: Use PVC for /data persistence
**Rationale**: The application may use SQLite or store session data in `/data`. A PVC ensures data survives pod restarts and enables backup/restore workflows.
**Alternatives considered**:
- emptyDir: Loses data on pod restart, unsuitable for production
- hostPath: Ties pod to specific node, reduces portability
- PVC (chosen): Standard Kubernetes pattern, supports storage classes, backup-friendly
**Implementation**:
- Optional PVC controlled by `persistence.enabled` flag
- Configurable storage class, size, and access mode
- Defaults to disabled for backward compatibility
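For example, a minimal override enabling persistence could look like the following sketch (the storage class name is a placeholder for whatever the cluster provides):
```yaml
persistence:
  enabled: true
  storageClass: 'longhorn' # placeholder; an empty string falls back to the cluster default
  accessMode: ReadWriteOnce
  size: 5Gi
```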
### Decision: Environment variable structure in values.yaml
**Rationale**: Flatten environment variables under logical sections (config, k8s, oidc, redis) rather than deep nesting for better readability.
**Structure**:
```yaml
config:
  adminToken: ''
  jwtSecret: ''
  httpPort: 8883
  tcpPort: 1883
k8s:
  enabled: true # default true since chart runs in K8s
ws:
  enabled: false
api:
  enabled: false
tcp:
  enabled: false
oidc:
  enabled: false
  discovery: ''
  clientId: ''
  clientSecret: ''
  groupField: 'groups'
  adminGroup: ''
  writerGroup: ''
  readerGroup: ''
redis:
  enabled: false
  host: 'localhost'
  port: 6379
  password: ''
  db: 0
```
### Decision: ServiceAccount template instead of hardcoded name
**Rationale**: Current deployment references `{{ .Release.Name }}` for ServiceAccount but doesn't create it. Extract to proper template with configurable name and annotations.
**Migration**: Existing deployments referencing release name continue working.
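A sketch of how cloud credentials could be attached through the new template (the IRSA role ARN below is purely illustrative):
```yaml
serviceAccount:
  create: true
  name: '' # empty string falls back to the generated fullname
  annotations:
    eks.amazonaws.com/role-arn: 'arn:aws:iam::123456789012:role/backbone' # illustrative only
```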
### Decision: Default K8S_ENABLED to true in chart
**Rationale**: The Helm chart is deployed TO Kubernetes, so K8s integration should default to enabled. Users can disable if running in non-operator mode.
### Decision: Security context defaults
Apply restricted security context by default:
```yaml
securityContext:
  runAsNonRoot: true
  runAsUser: 1000
  fsGroup: 1000
  readOnlyRootFilesystem: false # /data needs write access
```
**Rationale**: Follows Kubernetes security best practices. ReadOnlyRootFilesystem disabled because SQLite needs write access to `/data`.
### Decision: Probe configuration
Add both liveness and readiness probes with sensible defaults:
- Liveness: HTTP GET `/health` on port 8883 (requires API_ENABLED)
- Readiness: HTTP GET `/health` on port 8883
- Fallback: TCP socket check on ports if API disabled
**Rationale**: Enables Kubernetes to detect unhealthy pods and route traffic appropriately.
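The HTTP defaults ship in `values.yaml`; the TCP fallback is not wired up as an automatic default, but because the probe block is passed through verbatim (minus `enabled`), an override approximating it could look like this sketch:
```yaml
probes:
  liveness:
    enabled: true
    httpGet: null # Helm merges maps, so the default HTTP probe must be nulled out
    tcpSocket:
      port: tcp # named container port from the deployment template
    initialDelaySeconds: 30
    periodSeconds: 10
```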
## Risks / Trade-offs
### Risk: Breaking changes for existing deployments
**Mitigation**:
- Set conservative defaults matching current behavior where possible
- Document migration path in CHANGELOG or upgrade notes
- Version bump signals breaking changes (0.1.0 → 0.2.0)
### Risk: Complex values.yaml overwhelming users
**Mitigation**:
- Provide comprehensive comments
- Include examples in comments
- Keep sensible defaults for 90% use case
- Create example values files for common scenarios
### Risk: Storage class availability varies by cluster
**Mitigation**:
- Make storage class configurable (default: `""` uses cluster default)
- Document common storage classes in values comments
- Support disabling persistence entirely
## Migration Plan
### For existing deployments:
1. Review `values.yaml` changes
2. Set `persistence.enabled: false` to maintain stateless behavior (if desired)
3. Configure environment variables previously set via manual env overrides
4. Update service types if non-default required
5. Helm upgrade with new chart version
### Rollback:
Standard Helm rollback: `helm rollback <release> <revision>`
### Validation:
```bash
# Dry-run
helm upgrade --install backbone ./charts --dry-run --debug
# Lint
helm lint ./charts
# Template verification
helm template backbone ./charts > manifests.yaml
kubectl apply --dry-run=client -f manifests.yaml
```
## Open Questions
1. **Should probes be enabled by default?**
   - Proposal: Yes, but only if `api.enabled=true`, otherwise use TCP checks
2. **Default persistence size?**
   - Proposal: 1Gi for SQLite database and session data
3. **Should we support initContainers for DB migrations?**
   - Proposal: No, out of scope for this change (future enhancement)
4. **Ingress class defaults?**
   - Proposal: Empty string, user must specify their ingress class


@@ -0,0 +1,24 @@
## Why
The current Helm chart lacks configuration flexibility and best-practices support. It does not expose the environment variables documented in the README, has no PersistentVolume support for data persistence, and lacks standard Helm values patterns (resources, nodeSelector, tolerations, etc.).
## What Changes
- Add comprehensive values structure for all configuration options documented in README
- Add PersistentVolumeClaim configuration with configurable storage class for `/data` mount
- Add standard Helm best practices: resource limits/requests, node selectors, tolerations, affinity, security contexts
- Add proper labels and annotations following Helm conventions
- Add liveness and readiness probes for container health checks
- Add ServiceAccount template (currently hardcoded in deployment)
- Make service types and configurations customizable
- Add ingress support for HTTP API endpoint
## Impact
- Affected specs: `helm-deployment` (new capability)
- Affected code:
  - `charts/values.yaml` - Complete restructure with backward compatibility
  - `charts/templates/deployment.yaml` - Add volume mounts, env vars, probes, security
  - `charts/templates/services.yaml` - Make service types configurable
  - `charts/templates/*.yaml` - Add missing templates (serviceaccount, pvc, ingress)
  - `charts/Chart.yaml` - Version bump to reflect changes


@@ -0,0 +1,278 @@
## ADDED Requirements
### Requirement: Configuration Values Structure
The Helm chart SHALL provide a comprehensive values.yaml structure that exposes all configuration options documented in the README.
#### Scenario: All environment variables configurable
- **WHEN** a user deploys the chart
- **THEN** values.yaml MUST include sections for: config (adminToken, jwtSecret, ports), k8s (enabled), ws (enabled), api (enabled), tcp (enabled), oidc (all 8 variables), and redis (all 5 variables)
#### Scenario: Default values match README defaults
- **WHEN** a user deploys without custom values
- **THEN** environment variables MUST default to values documented in README (e.g., K8S_ENABLED=true in K8s context, HTTP_PORT=8883, TCP_PORT=1883)
### Requirement: Persistent Volume Support
The chart SHALL support optional persistent storage for the `/data` directory with configurable storage class.
#### Scenario: Enable persistence with default storage class
- **WHEN** `persistence.enabled=true` is set
- **THEN** a PersistentVolumeClaim MUST be created and mounted to `/data` in the container
#### Scenario: Custom storage class
- **WHEN** `persistence.storageClass` is specified
- **THEN** the PVC MUST request that storage class
#### Scenario: Configurable volume size
- **WHEN** `persistence.size` is specified
- **THEN** the PVC MUST request that storage size (default: 1Gi)
#### Scenario: Persistence disabled by default
- **WHEN** no persistence configuration is provided
- **THEN** a PVC MUST NOT be created and the deployment uses emptyDir or no volume
### Requirement: Resource Management
The chart SHALL support Kubernetes resource limits and requests configuration.
#### Scenario: Resource limits configurable
- **WHEN** `resources.limits.cpu` or `resources.limits.memory` are set
- **THEN** the deployment MUST include these resource limits
#### Scenario: Resource requests configurable
- **WHEN** `resources.requests.cpu` or `resources.requests.memory` are set
- **THEN** the deployment MUST include these resource requests
#### Scenario: Default resources
- **WHEN** no resources are specified
- **THEN** the deployment MUST NOT set resource constraints (Kubernetes default behavior)
### Requirement: Pod Scheduling
The chart SHALL support standard Kubernetes pod scheduling options.
#### Scenario: Node selector support
- **WHEN** `nodeSelector` values are provided
- **THEN** the deployment MUST include the node selector configuration
#### Scenario: Tolerations support
- **WHEN** `tolerations` array is provided
- **THEN** the deployment MUST include the tolerations
#### Scenario: Affinity support
- **WHEN** `affinity` configuration is provided
- **THEN** the deployment MUST include the affinity rules
### Requirement: Health Probes
The chart SHALL support configurable liveness and readiness probes.
#### Scenario: HTTP probes when API enabled
- **WHEN** `api.enabled=true` and probes are enabled
- **THEN** liveness and readiness probes MUST use HTTP GET on `/health` endpoint
#### Scenario: TCP probes as fallback
- **WHEN** `api.enabled=false` and probes are enabled
- **THEN** liveness and readiness probes MUST use TCP socket checks on configured ports
#### Scenario: Configurable probe parameters
- **WHEN** probe values (`initialDelaySeconds`, `periodSeconds`, `timeoutSeconds`) are set
- **THEN** the deployment MUST use these probe configurations
#### Scenario: Probes can be disabled
- **WHEN** `livenessProbe.enabled=false` or `readinessProbe.enabled=false`
- **THEN** the respective probe MUST be omitted from the deployment
### Requirement: Security Context
The chart SHALL support security context configuration following Kubernetes security best practices.
#### Scenario: Pod security context
- **WHEN** `podSecurityContext` values are provided
- **THEN** the deployment MUST apply these security settings at pod level
#### Scenario: Container security context
- **WHEN** `securityContext` values are provided
- **THEN** the deployment MUST apply these security settings at container level
#### Scenario: Default security settings
- **WHEN** no security context is specified
- **THEN** the deployment SHOULD use secure defaults (runAsNonRoot, non-root UID)
### Requirement: Service Configuration
The chart SHALL support configurable service types and settings for both HTTP and TCP services.
#### Scenario: HTTP service type configurable
- **WHEN** `service.http.type` is set to LoadBalancer, ClusterIP, or NodePort
- **THEN** the HTTP service MUST use that service type
#### Scenario: TCP service type configurable
- **WHEN** `service.tcp.type` is set to LoadBalancer, ClusterIP, or NodePort
- **THEN** the TCP service MUST use that service type
#### Scenario: Service annotations
- **WHEN** `service.http.annotations` or `service.tcp.annotations` are provided
- **THEN** the respective services MUST include those annotations
#### Scenario: Service ports configurable
- **WHEN** `service.http.port` or `service.tcp.port` are specified
- **THEN** the services MUST expose those external ports (targeting container ports from config)
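For instance, exposing the MQTT listener through a cloud load balancer should only require a small override such as this sketch (the annotation is a common provider-specific example, not defined by the chart):
```yaml
service:
  tcp:
    type: LoadBalancer
    port: 1883
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-type: 'nlb' # illustrative AWS annotation
```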
### Requirement: ServiceAccount Management
The chart SHALL create and manage a ServiceAccount for the deployment with configurable name and annotations.
#### Scenario: ServiceAccount creation
- **WHEN** the chart is deployed
- **THEN** a ServiceAccount resource MUST be created
#### Scenario: ServiceAccount name configurable
- **WHEN** `serviceAccount.name` is specified
- **THEN** the ServiceAccount and deployment MUST use that name
#### Scenario: ServiceAccount annotations
- **WHEN** `serviceAccount.annotations` are provided
- **THEN** the ServiceAccount MUST include those annotations (useful for IRSA, Workload Identity)
### Requirement: Ingress Support
The chart SHALL support optional Ingress configuration for exposing the HTTP API.
#### Scenario: Ingress creation
- **WHEN** `ingress.enabled=true`
- **THEN** an Ingress resource MUST be created
#### Scenario: Ingress host configuration
- **WHEN** `ingress.hosts` array is provided
- **THEN** the Ingress MUST include rules for those hosts
#### Scenario: Ingress TLS
- **WHEN** `ingress.tls` configuration is provided
- **THEN** the Ingress MUST include TLS settings with specified secret names
#### Scenario: Ingress class
- **WHEN** `ingress.className` is specified
- **THEN** the Ingress MUST use that ingress class
#### Scenario: Ingress annotations
- **WHEN** `ingress.annotations` are provided
- **THEN** the Ingress MUST include those annotations (e.g., for cert-manager, nginx settings)
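A values override satisfying these scenarios might look like the following sketch (host, issuer, and secret names are illustrative):
```yaml
ingress:
  enabled: true
  className: 'nginx' # assumes an nginx ingress controller is installed
  annotations:
    cert-manager.io/cluster-issuer: 'letsencrypt' # illustrative cert-manager issuer
  hosts:
    - host: mqtt.example.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - hosts:
        - mqtt.example.com
      secretName: backbone-tls
```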
### Requirement: Labels and Annotations
The chart SHALL apply standard Helm and Kubernetes labels following best practices.
#### Scenario: Standard labels applied
- **WHEN** resources are created
- **THEN** they MUST include labels: `app.kubernetes.io/name`, `app.kubernetes.io/instance`, `app.kubernetes.io/version`, `app.kubernetes.io/managed-by`
#### Scenario: Custom labels support
- **WHEN** `commonLabels` are defined in values
- **THEN** all resources MUST include these additional labels
#### Scenario: Custom annotations support
- **WHEN** `commonAnnotations` are defined in values
- **THEN** all resources MUST include these additional annotations
### Requirement: Environment Variable Mapping
The chart SHALL correctly map all values.yaml configuration to container environment variables matching README documentation.
#### Scenario: Admin and JWT configuration
- **WHEN** `config.adminToken` or `config.jwtSecret` are set
- **THEN** environment variables `ADMIN_TOKEN` and `JWT_SECRET` MUST be set in the container
#### Scenario: Feature toggles
- **WHEN** `k8s.enabled`, `ws.enabled`, `api.enabled`, or `tcp.enabled` are set
- **THEN** corresponding environment variables `K8S_ENABLED`, `WS_ENABLED`, `API_ENABLED`, `TCP_ENABLED` MUST be set as string "true" or "false"
#### Scenario: Port configuration
- **WHEN** `config.httpPort` or `config.tcpPort` are set
- **THEN** environment variables `HTTP_PORT` and `TCP_PORT` MUST be set
#### Scenario: OIDC configuration
- **WHEN** OIDC values (`oidc.enabled`, `oidc.discovery`, etc.) are provided
- **THEN** all 8 OIDC environment variables MUST be set correctly
#### Scenario: Redis configuration
- **WHEN** Redis values (`redis.enabled`, `redis.host`, etc.) are provided
- **THEN** all 5 Redis environment variables MUST be set correctly
#### Scenario: Sensitive values from secrets
- **WHEN** `config.jwtSecret`, `config.adminToken`, `oidc.clientSecret`, or `redis.password` reference existing secrets
- **THEN** the deployment MUST use valueFrom/secretKeyRef to inject these values
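The current deployment template injects these values inline, so this scenario describes the target shape rather than shipped behaviour; a sketch of the env entry it calls for (Secret name and key are hypothetical):
```yaml
env:
  - name: JWT_SECRET
    valueFrom:
      secretKeyRef:
        name: backbone-secrets # hypothetical pre-existing Secret
        key: jwt-secret
```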
### Requirement: Template Syntax Correctness
The chart templates SHALL use correct Helm template syntax without errors.
#### Scenario: Valid Go template syntax
- **WHEN** templates are rendered with `helm template`
- **THEN** no syntax errors MUST occur
#### Scenario: No spacing in template delimiters
- **WHEN** examining template files
- **THEN** template expressions MUST use `{{` and `}}` without internal spaces (e.g., `{{ .Value }}` not `{ { .Value } }`)
### Requirement: Chart Validation
The chart SHALL pass Helm linting and validation checks.
#### Scenario: Helm lint passes
- **WHEN** `helm lint` is run on the chart
- **THEN** no errors MUST be reported
#### Scenario: Chart renders successfully
- **WHEN** `helm template` is run with default values
- **THEN** valid Kubernetes manifests MUST be produced
#### Scenario: Chart renders with custom values
- **WHEN** `helm template` is run with various custom values combinations
- **THEN** valid Kubernetes manifests MUST be produced without errors


@@ -0,0 +1,41 @@
## 1. Update Values Schema
- [x] 1.1 Restructure `values.yaml` with comprehensive configuration sections
- [x] 1.2 Add all environment variable mappings from README
- [x] 1.3 Add persistence configuration with storage class options
- [x] 1.4 Add standard Helm values (resources, nodeSelector, tolerations, affinity)
- [x] 1.5 Add probe configurations (liveness, readiness)
- [x] 1.6 Add service configuration options
- [x] 1.7 Add ingress configuration
## 2. Update Deployment Template
- [x] 2.1 Add all environment variables from values
- [x] 2.2 Add PVC volume mount to `/data`
- [x] 2.3 Add resource limits and requests
- [x] 2.4 Add node selector, tolerations, and affinity
- [x] 2.5 Add security context configurations
- [x] 2.6 Add liveness and readiness probes
- [x] 2.7 Add proper labels and annotations
- [x] 2.8 Fix template syntax issues (remove spaces in braces)
## 3. Create Missing Templates
- [x] 3.1 Create `serviceaccount.yaml` template
- [x] 3.2 Create `persistentvolumeclaim.yaml` template
- [x] 3.3 Create `ingress.yaml` template (optional, controlled by values)
- [x] 3.4 Update `clusterrolebinding.yaml` to reference ServiceAccount template
## 4. Update Service Templates
- [x] 4.1 Make HTTP service type configurable (ClusterIP/LoadBalancer/NodePort)
- [x] 4.2 Make TCP service type configurable
- [x] 4.3 Add service annotations support
- [x] 4.4 Add proper labels following Helm conventions
## 5. Documentation and Validation
- [x] 5.1 Update `Chart.yaml` version (bump to 0.2.0)
- [x] 5.2 Add comments to `values.yaml` explaining options
- [x] 5.3 Test chart rendering with `helm template`
- [x] 5.4 Validate against Helm best practices using `helm lint`

openspec/project.md

@@ -0,0 +1,104 @@
# Project Context
## Purpose
Backbone is a Kubernetes-native MQTT broker with fine-grained access control and topic validation. It provides declarative configuration through Custom Resource Definitions (CRDs), JWT-based authentication with multiple providers, and AWS IAM-style statement-based authorization for MQTT operations. The broker supports multiple transport protocols (TCP, WebSocket) and includes a RESTful API for management.
## Tech Stack
- **Runtime**: Node.js 23+, TypeScript 5.9
- **Package Manager**: pnpm 10.18
- **MQTT Broker**: Aedes (v0.51) with Redis persistence support
- **HTTP Framework**: Fastify 5.6 with WebSocket plugin
- **Kubernetes Client**: @kubernetes/client-node
- **Authentication**: jsonwebtoken, OIDC support
- **Validation**: Zod 4.1 for schema validation
- **Database**: Knex with PostgreSQL and SQLite support
- **Testing**: Vitest 3.2 with coverage
- **Code Quality**: ESLint 9.37, Prettier 3.6, TypeScript strict mode
## Project Conventions
### Code Style
- **NO default exports** - use named exports only (`export { ClassName }`)
- **Type definitions**: use `type`, NOT `interface` (`type Foo = { ... }`)
- **File extensions**: always include `.ts` in imports (`from './file.ts'`)
- **Import paths**: use `#root/*` alias for src/ (`#root/utils/services.ts`)
- **Import order**: builtin → external → internal → parent → sibling → index (newlines between groups)
- **Private fields**: use `#` prefix for private class members (`#services: Services`)
- **Formatting**: 120 char width, single quotes, 2 spaces, semicolons, trailing commas
- **Exports**: exports must be last in file (`import/exports-last` rule)
- **NO comments** unless explicitly requested
### Architecture Patterns
- **Dependency Injection**: Services container pattern - all classes accept `services: Services` in constructor, access via `this.#services.get(ClassName)`
- **Configuration**: Centralized `Config` class using environment variables
- **Authentication**: Pluggable provider system via `SessionProvider` supporting multiple auth methods (K8s, OIDC, JWT, Admin)
- **Authorization**: Statement-based policies (similar to AWS IAM) with effect/resources/actions structure
- **Validation**: Zod schemas for all data validation (see `*.schemas.ts` files)
- **Event-driven**: Custom event emitter for broker events and K8s resource changes
- **CRD Pattern**: Kubernetes operator watches Client and Topic CRDs for declarative configuration
### Testing Strategy
- **Framework**: Vitest with coverage via @vitest/coverage-v8
- **Test command**: `pnpm test:unit` (all tests) or `pnpm test:unit tests/mqtt.test.ts` (specific file)
- **Pre-commit checks**: MUST run `pnpm build` (TypeScript compilation) and `pnpm test:lint` (ESLint) before committing
- **Test location**: `tests/` directory with `tests/utils/` for test utilities
- **Coverage**: Enabled via Vitest configuration
- **NEVER assume test framework** - always check package.json or README for test commands
### Git Workflow
- **CI/CD**: GitHub Actions with auto-labeler, build jobs, draft releases
- **Release Management**: Automated draft releases via release-drafter
- **Commit Requirements**: Must pass `pnpm build` and `pnpm test:lint` before committing
- **NEVER commit unless explicitly requested** by the user
## Domain Context
### MQTT Concepts
- **Actions**: `mqtt:publish`, `mqtt:subscribe`, `mqtt:read`
- **Topic Patterns**: Supports wildcards (`*` single-level, `**` multi-level)
- **QoS Levels**: 0 (at most once), 1 (at least once), 2 (exactly once)
- **Transports**: TCP (port 1883), WebSocket (ws://host:8883/ws)
### Authorization Model
- **Statements**: Array of `{ effect: 'allow' | 'deny', resources: string[], actions: string[] }`
- **Resource Format**: `mqtt:topic/pattern` or `*` for all
- **Evaluation**: Deny-by-default, explicit allow required, deny overrides allow
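A sketch of a statement list under this model (topic names are illustrative):
```yaml
statements:
  - effect: allow
    resources: ['mqtt:sensors/**'] # multi-level wildcard
    actions: ['mqtt:publish', 'mqtt:subscribe']
  - effect: deny # deny overrides the allow above for this subtree
    resources: ['mqtt:sensors/private/**']
    actions: ['*']
```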
### Kubernetes Resources
- **Client CRD**: Defines MQTT client access policies with statement-based authorization
- **Topic CRD**: Configures topic validation rules (maxMessageSize, allowedQoS, patterns)
- **Namespace-scoped**: Resources are namespace-aware for multi-tenancy
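As an illustration of the CRD pattern (the API group matches the chart's ClusterRole; the `v1` version and exact field layout are assumptions):
```yaml
apiVersion: backbone.mortenolsen.pro/v1 # version assumed for illustration
kind: Topic
metadata:
  name: sensors
  namespace: iot-tenant # resources are namespace-scoped
spec:
  maxMessageSize: 1024
  allowedQoS: [0, 1]
```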
### Authentication Providers
- **K8sAuth**: Kubernetes ServiceAccount token authentication
- **OidcAuth**: OpenID Connect with configurable discovery, client credentials, group-based authorization
- **JwtAuth**: Custom JWT tokens with configurable secret
- **AdminAuth**: Static admin token for management operations
## Important Constraints
- **Strict TypeScript**: No implicit any, null checks required
- **No default exports**: Enforced by ESLint
- **Import extensions**: Must include `.ts` in all imports
- **Private field syntax**: Must use `#` prefix, not `private` keyword
- **Services pattern**: All dependencies must go through Services container
- **Validation required**: All external data must be validated via Zod schemas
- **Pre-commit checks**: Build and lint must pass before committing
## External Dependencies
- **Kubernetes API**: When K8S_ENABLED=true, connects to K8s API to watch Client/Topic CRDs
- **Redis**: Optional persistence layer when REDIS_ENABLED=true (Aedes persistence)
- **OIDC Provider**: When OIDC_ENABLED=true, requires OIDC discovery endpoint
- **PostgreSQL/SQLite**: Database support via Knex (configurable)
- **GitHub Actions**: CI/CD pipeline for build, lint, and release management


@@ -0,0 +1,282 @@
# helm-deployment Specification
## Purpose
TBD - created by archiving change update-helm-chart-best-practices. Update Purpose after archive.
## Requirements
### Requirement: Configuration Values Structure
The Helm chart SHALL provide a comprehensive values.yaml structure that exposes all configuration options documented in the README.
#### Scenario: All environment variables configurable
- **WHEN** a user deploys the chart
- **THEN** values.yaml MUST include sections for: config (adminToken, jwtSecret, ports), k8s (enabled), ws (enabled), api (enabled), tcp (enabled), oidc (all 8 variables), and redis (all 5 variables)
#### Scenario: Default values match README defaults
- **WHEN** a user deploys without custom values
- **THEN** environment variables MUST default to values documented in README (e.g., K8S_ENABLED=true in K8s context, HTTP_PORT=8883, TCP_PORT=1883)
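A minimal values.yaml sketch of that layout (sub-keys under `oidc` and `redis` are abbreviated, since only their counts are fixed here):
```yaml
config:
  adminToken: ""    # -> ADMIN_TOKEN
  jwtSecret: ""     # -> JWT_SECRET
  httpPort: 8883    # -> HTTP_PORT
  tcpPort: 1883     # -> TCP_PORT
k8s:
  enabled: true     # -> K8S_ENABLED
ws:
  enabled: true     # -> WS_ENABLED
api:
  enabled: true     # -> API_ENABLED
tcp:
  enabled: true     # -> TCP_ENABLED
oidc:
  enabled: false    # plus the remaining OIDC settings
redis:
  enabled: false    # plus host, password, and the other Redis settings
```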
### Requirement: Persistent Volume Support
The chart SHALL support optional persistent storage for the `/data` directory with configurable storage class.
#### Scenario: Enable persistence with default storage class
- **WHEN** `persistence.enabled=true` is set
- **THEN** a PersistentVolumeClaim MUST be created and mounted to `/data` in the container
#### Scenario: Custom storage class
- **WHEN** `persistence.storageClass` is specified
- **THEN** the PVC MUST request that storage class
#### Scenario: Configurable volume size
- **WHEN** `persistence.size` is specified
- **THEN** the PVC MUST request that storage size (default: 1Gi)
#### Scenario: Persistence disabled by default
- **WHEN** no persistence configuration is provided
- **THEN** a PVC MUST NOT be created, and the deployment uses an emptyDir volume or no volume
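A values snippet enabling storage per the scenarios above (the class name is illustrative):
```yaml
persistence:
  enabled: true
  storageClass: fast-ssd   # any StorageClass available in the cluster
  size: 5Gi                # overrides the 1Gi default
```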
### Requirement: Resource Management
The chart SHALL support Kubernetes resource limits and requests configuration.
#### Scenario: Resource limits configurable
- **WHEN** `resources.limits.cpu` or `resources.limits.memory` are set
- **THEN** the deployment MUST include these resource limits
#### Scenario: Resource requests configurable
- **WHEN** `resources.requests.cpu` or `resources.requests.memory` are set
- **THEN** the deployment MUST include these resource requests
#### Scenario: Default resources
- **WHEN** no resources are specified
- **THEN** the deployment MUST NOT set resource constraints (Kubernetes default behavior)
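For example, a conservative starting point using the standard Kubernetes resource shape (numbers are illustrative, not tuned for this broker):
```yaml
resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 500m
    memory: 256Mi
```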
### Requirement: Pod Scheduling
The chart SHALL support standard Kubernetes pod scheduling options.
#### Scenario: Node selector support
- **WHEN** `nodeSelector` values are provided
- **THEN** the deployment MUST include the node selector configuration
#### Scenario: Tolerations support
- **WHEN** `tolerations` array is provided
- **THEN** the deployment MUST include the tolerations
#### Scenario: Affinity support
- **WHEN** `affinity` configuration is provided
- **THEN** the deployment MUST include the affinity rules
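These three values pass through to the pod spec unchanged; a hypothetical example pinning the broker to dedicated nodes:
```yaml
nodeSelector:
  kubernetes.io/arch: amd64
tolerations:
  - key: dedicated           # illustrative taint key/value
    operator: Equal
    value: mqtt
    effect: NoSchedule
affinity: {}                 # any valid affinity stanza is accepted
```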
### Requirement: Health Probes
The chart SHALL support configurable liveness and readiness probes.
#### Scenario: HTTP probes when API enabled
- **WHEN** `api.enabled=true` and probes are enabled
- **THEN** liveness and readiness probes MUST use HTTP GET on `/health` endpoint
#### Scenario: TCP probes as fallback
- **WHEN** `api.enabled=false` and probes are enabled
- **THEN** liveness and readiness probes MUST use TCP socket checks on configured ports
#### Scenario: Configurable probe parameters
- **WHEN** probe values (`initialDelaySeconds`, `periodSeconds`, `timeoutSeconds`) are set
- **THEN** the deployment MUST use these probe configurations
#### Scenario: Probes can be disabled
- **WHEN** `livenessProbe.enabled=false` or `readinessProbe.enabled=false`
- **THEN** the respective probe MUST be omitted from the deployment
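A probe configuration sketch; with `api.enabled=true` these render as HTTP GET checks against `/health` (timings are illustrative):
```yaml
livenessProbe:
  enabled: true
  initialDelaySeconds: 10
  periodSeconds: 10
  timeoutSeconds: 5
readinessProbe:
  enabled: true
  initialDelaySeconds: 5
  periodSeconds: 5
  timeoutSeconds: 3
```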
### Requirement: Security Context
The chart SHALL support security context configuration following Kubernetes security best practices.
#### Scenario: Pod security context
- **WHEN** `podSecurityContext` values are provided
- **THEN** the deployment MUST apply these security settings at pod level
#### Scenario: Container security context
- **WHEN** `securityContext` values are provided
- **THEN** the deployment MUST apply these security settings at container level
#### Scenario: Default security settings
- **WHEN** no security context is specified
- **THEN** the deployment SHOULD use secure defaults (runAsNonRoot, non-root UID)
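A sketch of hardened settings matching those defaults (the UID and fsGroup are illustrative; fsGroup assumes the `/data` volume must stay writable):
```yaml
podSecurityContext:
  runAsNonRoot: true
  runAsUser: 1000
  fsGroup: 1000              # keeps the /data volume writable for the broker
securityContext:
  allowPrivilegeEscalation: false
  capabilities:
    drop: ["ALL"]
```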
### Requirement: Service Configuration
The chart SHALL support configurable service types and settings for both HTTP and TCP services.
#### Scenario: HTTP service type configurable
- **WHEN** `service.http.type` is set to LoadBalancer, ClusterIP, or NodePort
- **THEN** the HTTP service MUST use that service type
#### Scenario: TCP service type configurable
- **WHEN** `service.tcp.type` is set to LoadBalancer, ClusterIP, or NodePort
- **THEN** the TCP service MUST use that service type
#### Scenario: Service annotations
- **WHEN** `service.http.annotations` or `service.tcp.annotations` are provided
- **THEN** the respective services MUST include those annotations
#### Scenario: Service ports configurable
- **WHEN** `service.http.port` or `service.tcp.port` are specified
- **THEN** the services MUST expose those external ports (targeting container ports from config)
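A combined service values sketch; the LoadBalancer type and port 1884 mirror the skaffold.yaml shown later in this commit:
```yaml
service:
  http:
    type: ClusterIP
    port: 8883
    annotations: {}
  tcp:
    type: LoadBalancer
    port: 1884
    annotations: {}          # e.g. cloud load-balancer or MetalLB settings
```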
### Requirement: ServiceAccount Management
The chart SHALL create and manage a ServiceAccount for the deployment with configurable name and annotations.
#### Scenario: ServiceAccount creation
- **WHEN** the chart is deployed
- **THEN** a ServiceAccount resource MUST be created
#### Scenario: ServiceAccount name configurable
- **WHEN** `serviceAccount.name` is specified
- **THEN** the ServiceAccount and deployment MUST use that name
#### Scenario: ServiceAccount annotations
- **WHEN** `serviceAccount.annotations` are provided
- **THEN** the ServiceAccount MUST include those annotations (useful for IRSA, Workload Identity)
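For instance, binding the pod to a cloud identity (the annotation shown follows the GKE Workload Identity convention; the account name and project are invented):
```yaml
serviceAccount:
  name: backbone-broker
  annotations:
    iam.gke.io/gcp-service-account: broker@my-project.iam.gserviceaccount.com
```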
### Requirement: Ingress Support
The chart SHALL support optional Ingress configuration for exposing the HTTP API.
#### Scenario: Ingress creation
- **WHEN** `ingress.enabled=true`
- **THEN** an Ingress resource MUST be created
#### Scenario: Ingress host configuration
- **WHEN** `ingress.hosts` array is provided
- **THEN** the Ingress MUST include rules for those hosts
#### Scenario: Ingress TLS
- **WHEN** `ingress.tls` configuration is provided
- **THEN** the Ingress MUST include TLS settings with specified secret names
#### Scenario: Ingress class
- **WHEN** `ingress.className` is specified
- **THEN** the Ingress MUST use that ingress class
#### Scenario: Ingress annotations
- **WHEN** `ingress.annotations` are provided
- **THEN** the Ingress MUST include those annotations (e.g., for cert-manager, nginx settings)
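A full ingress example; the `paths`/`pathType` layout inside each host entry follows common chart conventions and is an assumption, as this spec only fixes the top-level keys:
```yaml
ingress:
  enabled: true
  className: nginx
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt   # illustrative issuer name
  hosts:
    - host: mqtt.example.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: mqtt-example-tls
      hosts:
        - mqtt.example.com
```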
### Requirement: Labels and Annotations
The chart SHALL apply standard Helm and Kubernetes labels following best practices.
#### Scenario: Standard labels applied
- **WHEN** resources are created
- **THEN** they MUST include labels: `app.kubernetes.io/name`, `app.kubernetes.io/instance`, `app.kubernetes.io/version`, `app.kubernetes.io/managed-by`
#### Scenario: Custom labels support
- **WHEN** `commonLabels` are defined in values
- **THEN** all resources MUST include these additional labels
#### Scenario: Custom annotations support
- **WHEN** `commonAnnotations` are defined in values
- **THEN** all resources MUST include these additional annotations
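With `commonLabels: { team: platform }`, a rendered resource's metadata might look like this (chart name, release name, and version are illustrative):
```yaml
metadata:
  labels:
    app.kubernetes.io/name: backbone
    app.kubernetes.io/instance: backbone
    app.kubernetes.io/version: "0.1.0"
    app.kubernetes.io/managed-by: Helm
    team: platform           # merged in from commonLabels
```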
### Requirement: Environment Variable Mapping
The chart SHALL correctly map all values.yaml configuration to container environment variables matching README documentation.
#### Scenario: Admin and JWT configuration
- **WHEN** `config.adminToken` or `config.jwtSecret` are set
- **THEN** environment variables `ADMIN_TOKEN` and `JWT_SECRET` MUST be set in the container
#### Scenario: Feature toggles
- **WHEN** `k8s.enabled`, `ws.enabled`, `api.enabled`, or `tcp.enabled` are set
- **THEN** corresponding environment variables `K8S_ENABLED`, `WS_ENABLED`, `API_ENABLED`, `TCP_ENABLED` MUST be set as string "true" or "false"
#### Scenario: Port configuration
- **WHEN** `config.httpPort` or `config.tcpPort` are set
- **THEN** environment variables `HTTP_PORT` and `TCP_PORT` MUST be set
#### Scenario: OIDC configuration
- **WHEN** OIDC values (`oidc.enabled`, `oidc.discovery`, etc.) are provided
- **THEN** all 8 OIDC environment variables MUST be set correctly
#### Scenario: Redis configuration
- **WHEN** Redis values (`redis.enabled`, `redis.host`, etc.) are provided
- **THEN** all 5 Redis environment variables MUST be set correctly
#### Scenario: Sensitive values from secrets
- **WHEN** `config.jwtSecret`, `config.adminToken`, `oidc.clientSecret`, or `redis.password` reference existing secrets
- **THEN** the deployment MUST use valueFrom/secretKeyRef to inject these values
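A sketch of the rendered container env combining both mechanisms (the Secret name and key are hypothetical):
```yaml
env:
  - name: TCP_ENABLED
    value: "true"               # feature toggles rendered as strings
  - name: JWT_SECRET
    valueFrom:
      secretKeyRef:
        name: backbone-secrets  # pre-existing Secret supplied by the user
        key: jwt-secret
```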
### Requirement: Template Syntax Correctness
The chart templates SHALL use correct Helm template syntax without errors.
#### Scenario: Valid Go template syntax
- **WHEN** templates are rendered with `helm template`
- **THEN** rendering MUST complete without syntax errors
#### Scenario: No spacing in template delimiters
- **WHEN** examining template files
- **THEN** template expressions MUST use `{{` and `}}` with no space between the delimiter braces (e.g., `{{ .Value }}`, not `{ { .Value } }`)
### Requirement: Chart Validation
The chart SHALL pass Helm linting and validation checks.
#### Scenario: Helm lint passes
- **WHEN** `helm lint` is run on the chart
- **THEN** the lint MUST complete with no errors reported
#### Scenario: Chart renders successfully
- **WHEN** `helm template` is run with default values
- **THEN** valid Kubernetes manifests MUST be produced
#### Scenario: Chart renders with custom values
- **WHEN** `helm template` is run with various custom values combinations
- **THEN** valid Kubernetes manifests MUST be produced without errors

skaffold.yaml Normal file
View File

@@ -0,0 +1,32 @@
apiVersion: skaffold/v4beta7
kind: Config
metadata:
  name: backbone
build:
  artifacts:
    - image: zot.olsen.cloud/backbone
      context: .
      docker:
        dockerfile: Dockerfile
manifests:
  helm:
    releases:
      - name: backbone
        chartPath: chart
        namespace: homelab
        setValueTemplates:
          image.repository: 'zot.local/backbone'
          image.tag: '{{.IMAGE_TAG_zot_olsen_cloud_backbone}}@{{.IMAGE_DIGEST_zot_olsen_cloud_backbone}}'
          httpService:
            enabled: true
          environment: prod
          service:
            tcp:
              type: LoadBalancer
              port: 1884
deploy:
  # Use kubectl to apply the manifests.
  kubectl: {}

View File

@@ -1,34 +1,16 @@
 import { type FastifyPluginAsync } from 'fastify';
-import { z } from 'zod';
 import { manageEndpoints } from './endpoints/endpoints.manage.ts';
 import { authPlugin } from './plugins/plugins.auth.ts';
 import { messageEndpoints } from './endpoints/endpoints.message.ts';
-const api: FastifyPluginAsync = async (fastify) => {
-  fastify.route({
-    method: 'get',
-    url: '/health',
-    schema: {
-      operationId: 'health.get',
-      summary: 'Get health status',
-      tags: ['system'],
-      response: {
-        200: z.object({
-          status: z.literal('ok'),
-        }),
-      },
-    },
-    handler: () => {
-      return { status: 'ok' };
-    },
-  });
-  await authPlugin(fastify, {});
+const api: FastifyPluginAsync = async (app) => {
+  await authPlugin(app, {});
-  await fastify.register(manageEndpoints, {
+  await app.register(manageEndpoints, {
     prefix: '/manage',
   });
-  await fastify.register(messageEndpoints, {
+  await app.register(messageEndpoints, {
     prefix: '/message',
   });
 };

View File

@@ -1,13 +1,9 @@
 import type { FastifyPluginAsyncZod } from 'fastify-type-provider-zod';
 import { z } from 'zod';
-import { Config } from '#root/config/config.ts';
 import { MqttServer } from '#root/server/server.ts';
 const messageEndpoints: FastifyPluginAsyncZod = async (fastify) => {
-  const config = fastify.services.get(Config);
-  if (config.jwtSecret) {
   fastify.route({
     method: 'post',
     url: '',
@@ -57,7 +53,6 @@ const messageEndpoints: FastifyPluginAsyncZod = async (fastify) => {
       reply.send({ success: true });
     },
   });
-  }
 };
 export { messageEndpoints };

View File

@@ -18,6 +18,7 @@ import { createWebSocketStream } from 'ws';
 import fastifySensible from '@fastify/sensible';
 import redis from 'aedes-persistence-redis';
 import memory from 'aedes-persistence';
+import { z } from 'zod';
 import { api } from '../api/api.ts';
@@ -174,6 +175,23 @@ class MqttServer {
         prefix: '/api',
       });
     }
+    http.route({
+      method: 'get',
+      url: '/health',
+      schema: {
+        operationId: 'health.get',
+        summary: 'Get health status',
+        tags: ['system'],
+        response: {
+          200: z.object({
+            status: z.literal('ok'),
+          }),
+        },
+      },
+      handler: () => {
+        return { status: 'ok' };
+      },
+    });
     if (config.ws.enabled) {
       http.get('/ws', { websocket: true }, (socket, req) => {
         const stream = createWebSocketStream(socket);