20 Commits

Author SHA1 Message Date
Morten Olsen
44b400797c devops article 2026-01-26 11:06:12 +01:00
Morten Olsen
4235a5bcdf improved the opening for clubhouse protocol 2026-01-12 14:18:43 +01:00
Morten Olsen
67ab1b349d add clubhouse protocol article 2026-01-12 13:44:17 +01:00
Morten Olsen
e0619692fc add hyperconnect article 2026-01-12 11:14:11 +01:00
Morten Olsen
f8c17202a5 edits to agentsmd article 2026-01-10 16:38:18 +01:00
Morten Olsen
307e987558 add agentsmd article 2026-01-10 16:25:45 +01:00
Morten Olsen
5e1324adf5 add reading time 2026-01-10 11:46:34 +00:00
Morten Olsen
86a9dc2d31 docs: add AGENTS.md for agent guidelines 2026-01-10 11:46:34 +00:00
Morten Olsen
e38756d521 minor ui fixes 2026-01-10 08:46:49 +01:00
mortenolsenzn
ff6e988125 docs: add simple service pattern (#1)
Co-authored-by: Morten Olsen <fbtijfdq@void.black>
2026-01-09 10:54:37 +01:00
Morten Olsen
2ecf98876a feat: add tracking 2025-12-04 10:53:34 +01:00
Morten Olsen
bbb524ea92 fix: 100 on lighthouse 🥳 2025-12-03 08:51:44 +01:00
Morten Olsen
06564dff21 rewrite 2025-12-02 23:05:56 +01:00
Morten Olsen
1693a2620c added techstack 2025-09-23 23:13:04 +02:00
Morten Olsen
10886b40f5 fix: updates 2025-09-23 22:59:56 +02:00
Morten Olsen
4acf4093ec Update index.mdx 2025-09-22 07:52:55 +02:00
Morten Olsen
6f6e970a1e format fixes 2025-09-18 22:34:45 +02:00
Morten Olsen
2f42d69e12 add more links 2025-09-18 22:29:16 +02:00
Morten Olsen
185d9298a4 fix a method inaccuracy 2025-09-18 22:11:07 +02:00
Morten Olsen
10101fb30d npm security article 2025-09-18 21:35:57 +02:00
127 changed files with 5842 additions and 9803 deletions

View File

@@ -1,3 +0,0 @@
/node_modules/
/.astro/
/.vscode/

View File

@@ -1,24 +0,0 @@
/** @type {import("eslint").Linter.Config} */
module.exports = {
extends: ['plugin:astro/recommended'],
parser: '@typescript-eslint/parser',
parserOptions: {
tsconfigRootDir: __dirname,
sourceType: 'module',
ecmaVersion: 'latest'
},
overrides: [
{
files: ['*.astro'],
parser: 'astro-eslint-parser',
parserOptions: {
parser: '@typescript-eslint/parser',
extraFileExtensions: ['.astro']
},
rules: {
// override/add rules settings here, such as:
// "astro/no-set-html-directive": "error"
}
}
]
}

View File

@@ -27,11 +27,11 @@ jobs:
- name: Setup pnpm
uses: pnpm/action-setup@v3
with:
version: 8
version: "10.18"
- name: Setup Node
uses: actions/setup-node@v4
with:
node-version: "20"
node-version: "23"
cache: "pnpm"
- name: Setup Pages
id: pages

6
.gitignore vendored
View File

@@ -1,6 +1,6 @@
/.pnpm-store/
# build output
dist/
# generated types
.astro/
@@ -13,10 +13,12 @@ yarn-debug.log*
yarn-error.log*
pnpm-debug.log*
# environment variables
.env
.env.production
# macOS-specific files
.DS_Store
# jetbrains setting folder
.idea/

View File

@@ -1 +0,0 @@
pnpm lint-staged

View File

@@ -1,13 +0,0 @@
/** @type {import("prettier").Config} */
module.exports = {
...require('prettier-config-standard'),
plugins: [require.resolve('prettier-plugin-astro')],
overrides: [
{
files: '*.astro',
options: {
parser: 'astro'
}
}
]
}

View File

@@ -1,4 +1,4 @@
{
"recommendations": ["astro-build.astro-vscode", "esbenp.prettier-vscode"],
"recommendations": ["astro-build.astro-vscode"],
"unwantedRecommendations": []
}

14
.vscode/settings.json vendored
View File

@@ -1,14 +0,0 @@
{
"typescript.tsdk": "node_modules/typescript/lib",
"eslint.validate": [
"javascript",
"javascriptreact",
"astro",
"typescript",
"typescriptreact"
],
"prettier.documentSelectors": ["**/*.astro"],
"[astro]": {
"editor.defaultFormatter": "esbenp.prettier-vscode"
}
}

153
AGENTS.md Normal file
View File

@@ -0,0 +1,153 @@
# Agent Guidelines for this Repository
> **Important**: This file is the source of truth for all agents working in this repository. If you modify the build process, add new tools, change the architecture, or introduce new conventions, **you must update this file** to reflect those changes.
## 1. Project Overview & Commands
This is an **Astro** project using **pnpm**.
### Build & Run Commands
- **Install Dependencies**: `pnpm install`
- **Development Server**: `pnpm dev`
- **Build for Production**: `pnpm build` (Output: `dist/`)
- **Preview Production Build**: `pnpm preview`
- **Check Types**: `npx tsc --noEmit` (since `strict` is enabled)
### Testing & Linting
- **Linting**: No explicit lint command found in `package.json`. Follow existing code style strictly.
- **Testing**: No explicit test framework (Jest/Vitest) is currently configured.
- *If asked to add tests*: Verify if a test runner needs to be installed first.
## 2. Directory Structure
```text
/
├── public/ # Static assets
├── src/
│ ├── assets/ # Source assets (processed by Astro)
│ ├── components/ # Astro components
│ │ ├── base/ # UI Primitives
│ │ └── page/ # Layout-specific components (Header, Footer)
│ ├── content/ # Markdown/MDX content collections
│ │ ├── posts/
│ │ ├── experiences/
│ │ └── skills/
│ ├── data/ # Data access layer (Classes & Utilities)
│ ├── layouts/ # Page layouts (if any)
│ ├── pages/ # File-based routing
│ ├── styles/ # Global styles (if any)
│ └── utils/ # Helper functions
├── astro.config.ts # Astro configuration
├── package.json # Dependencies & Scripts
└── tsconfig.json # TypeScript configuration
```
## 3. Code Style & Conventions
### General Formatting
- **Indentation**: 2 spaces.
- **Semicolons**: **Preferred**. While mixed in some older files, new code should use semicolons.
- **Quotes**:
- **Code**: Single quotes `'string'`.
- **Imports**: Double quotes `"package"` or single quotes `'./file'` (mixed, generally follow file context).
- **JSX/Attributes**: Double quotes `<Component prop="value" />`.
- **Line Endings**: LF.
### TypeScript & JavaScript
- **Strictness**: `strictNullChecks` is enabled via `astro/tsconfigs/strict`.
- **Path Aliases**:
- Use `~/` to refer to `src/` (e.g., `import { data } from '~/data/data'`).
- **Naming**:
- Components/Classes: `PascalCase` (e.g., `Header.astro`, `Posts`).
- Variables/Functions: `camelCase` (e.g., `getPublished`).
- Constants: `camelCase` or `UPPER_CASE`.
- Private Fields: Use JS private fields `#field` over TypeScript `private` keyword where possible.
- **Error Handling**: Use `try...catch` or explicit checks (e.g., `if (!entry) throw new Error(...)`).
### Astro Components (.astro)
- **Structure**:
```astro
---
// Imports
import { Picture } from "astro:assets";
import { data } from "~/data/data";
// Logic (Top-level await supported)
const { title } = Astro.props;
const posts = await data.posts.getPublished();
---
<!-- Template -->
<div class="container">
<h1>{title}</h1>
{posts.map(post => <a href={post.slug}>{post.data.title}</a>)}
</div>
<style>
/* Scoped CSS */
.container {
max-width: var(--content-width);
}
/* Nesting is supported and encouraged */
.parent {
.child { color: red; }
}
</style>
```
- **Images**: Use `<Picture />` from `astro:assets` for optimized images.
- **CSS**:
- Use scoped `<style>` blocks at the bottom of the file.
- **Variables**: Use CSS variables for theming (e.g., `var(--content-width)`, `var(--t-fg)`).
## 4. Architecture & Patterns
### Data Access Layer (`src/data/`)
The project uses a dedicated data access layer instead of querying collections directly in components.
- **Pattern**:
- Data logic is encapsulated in classes (e.g., `class Posts`).
- These classes wrap `getCollection` and `getEntry` from `astro:content`.
- They provide helper methods like `getPublished()`, sorting, and mapping.
- **Export**: A central `data` object aggregates all services.
**Example (`src/data/data.posts.ts`):**
```typescript
import { getCollection } from "astro:content";
class Posts {
// Private mapper for transforming raw entries
#map = (post) => {
return { ...post, derivedProp: '...' };
}
public getPublished = async () => {
const collection = await getCollection('posts');
return collection
.map(this.#map)
.sort((a, b) => b.data.pubDate.getTime() - a.data.pubDate.getTime());
}
}
export const posts = new Posts();
```
**Usage:**
```typescript
import { data } from "~/data/data";
const posts = await data.posts.getPublished();
```
### Content Collections (`src/content/`)
- Defined in `src/content.config.ts`.
- Uses `zod` for schema validation.
- Loaders: Uses `glob` loader.
- Current Collections: `posts`, `experiences`, `skills`.
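A trimmed example of a collection definition in the shape this project uses (abridged from `src/content.config.ts`; the full file defines `posts`, `experiences`, and `skills`):
```typescript
import { defineCollection, z } from 'astro:content';
import { glob } from 'astro/loaders';

// Abridged: the `posts` collection, loaded via glob and validated with zod.
const posts = defineCollection({
  loader: glob({ pattern: "**/index.mdx", base: "./src/content/posts" }),
  schema: ({ image }) => z.object({
    slug: z.string(),
    title: z.string(),
    description: z.string(),
    pubDate: z.coerce.date(),
    heroImage: image(),
  }),
});

export const collections = { posts };
```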
## 5. Environment & Configuration
- **Package Manager**: `pnpm`
- **Config**: `astro.config.ts` handles integrations (MDX, Sitemap, Icon, etc.).
- **TS Config**: `tsconfig.json` extends `astro/tsconfigs/strict`.
## 6. Dependencies
- **Core**: `astro`, `@astrojs/mdx`, `astro-icon`.
- **Styling**: Standard CSS (scoped), `less` is in devDependencies.
- **Assets**: `@fontsource/vt323` (font), `sharp` (image processing).

View File

@@ -1,47 +0,0 @@
# Astro Starter Kit: Minimal
```sh
npm create astro@latest -- --template minimal
```
[![Open in StackBlitz](https://developer.stackblitz.com/img/open_in_stackblitz.svg)](https://stackblitz.com/github/withastro/astro/tree/latest/examples/minimal)
[![Open with CodeSandbox](https://assets.codesandbox.io/github/button-edit-lime.svg)](https://codesandbox.io/p/sandbox/github/withastro/astro/tree/latest/examples/minimal)
[![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/withastro/astro?devcontainer_path=.devcontainer/minimal/devcontainer.json)
> 🧑‍🚀 **Seasoned astronaut?** Delete this file. Have fun!
## 🚀 Project Structure
Inside of your Astro project, you'll see the following folders and files:
```text
/
├── public/
├── src/
│ └── pages/
│ └── index.astro
└── package.json
```
Astro looks for `.astro` or `.md` files in the `src/pages/` directory. Each page is exposed as a route based on its file name.
There's nothing special about `src/components/`, but that's where we like to put any Astro/React/Vue/Svelte/Preact components.
Any static assets, like images, can be placed in the `public/` directory.
## 🧞 Commands
All commands are run from the root of the project, from a terminal:
| Command | Action |
| :------------------------ | :----------------------------------------------- |
| `npm install` | Installs dependencies |
| `npm run dev` | Starts local dev server at `localhost:4321` |
| `npm run build` | Build your production site to `./dist/` |
| `npm run preview` | Preview your build locally, before deploying |
| `npm run astro ...` | Run CLI commands like `astro add`, `astro check` |
| `npm run astro -- --help` | Get help using the Astro CLI |
## 👀 Want to learn more?
Feel free to check [our documentation](https://docs.astro.build) or jump into our [Discord server](https://astro.build/chat).

View File

@@ -21,10 +21,22 @@ const getSiteInfo = () => {
export default defineConfig({
...getSiteInfo(),
output: 'static',
integrations: [mdx(), sitemap(), icon(), compress(), robotsTxt()],
server: {
allowedHosts: true,
},
integrations: [mdx(), sitemap(), icon(), compress({
HTML: false,
}), robotsTxt()],
devToolbar: {
enabled: false,
},
build: {
inlineStylesheets: 'always',
},
vite: {
build: {
assetsInlineLimit: 1024 * 10
}
}
})

View File

@@ -1,17 +0,0 @@
{
"$schema": "https://raw.githubusercontent.com/jetify-com/devbox/0.10.5/.schema/devbox.schema.json",
"packages": [
"nodejs@21"
],
"env": {
"DEVBOX_COREPACK_ENABLED": "true"
},
"shell": {
"init_hook": [],
"scripts": {
"test": [
"echo \"Error: no test specified\" && exit 1"
]
}
}
}

View File

@@ -1,70 +0,0 @@
{
"lockfile_version": "1",
"packages": {
"nodejs@21": {
"last_modified": "2024-03-22T07:26:23-04:00",
"plugin_version": "0.0.2",
"resolved": "github:NixOS/nixpkgs/a3ed7406349a9335cb4c2a71369b697cecd9d351#nodejs_21",
"source": "devbox-search",
"version": "21.7.1",
"systems": {
"aarch64-darwin": {
"outputs": [
{
"name": "out",
"path": "/nix/store/x1d9im8iy3q74jx1ij2k3pjsfgvqihn1-nodejs-21.7.1",
"default": true
},
{
"name": "libv8",
"path": "/nix/store/q6nyy20l3ixkc6j20sng8vfdjbx3fx3l-nodejs-21.7.1-libv8"
}
],
"store_path": "/nix/store/x1d9im8iy3q74jx1ij2k3pjsfgvqihn1-nodejs-21.7.1"
},
"aarch64-linux": {
"outputs": [
{
"name": "out",
"path": "/nix/store/b7cq5jvw90bl4ls3nssrj5xwh3d6vldf-nodejs-21.7.1",
"default": true
},
{
"name": "libv8",
"path": "/nix/store/w6z9pd67lv405dv366zbfn7cyvf8r43z-nodejs-21.7.1-libv8"
}
],
"store_path": "/nix/store/b7cq5jvw90bl4ls3nssrj5xwh3d6vldf-nodejs-21.7.1"
},
"x86_64-darwin": {
"outputs": [
{
"name": "out",
"path": "/nix/store/klvzykcgrhlbpkgdaw5329w2l09wp4vd-nodejs-21.7.1",
"default": true
},
{
"name": "libv8",
"path": "/nix/store/3nxs6fcwrbllix8zwdgpw52wg022h4mm-nodejs-21.7.1-libv8"
}
],
"store_path": "/nix/store/klvzykcgrhlbpkgdaw5329w2l09wp4vd-nodejs-21.7.1"
},
"x86_64-linux": {
"outputs": [
{
"name": "out",
"path": "/nix/store/7fd8ac3wm7gq8k5qd6l15hqx13bm4mr6-nodejs-21.7.1",
"default": true
},
{
"name": "libv8",
"path": "/nix/store/pxmz45ki1zrp6lzfypcjyhgk6v9mpk55-nodejs-21.7.1-libv8"
}
],
"store_path": "/nix/store/7fd8ac3wm7gq8k5qd6l15hqx13bm4mr6-nodejs-21.7.1"
}
}
}
}
}

View File

@@ -1,11 +0,0 @@
version: '3'
services:
dev:
build:
context: ./docker
working_dir: /app
volumes:
- ./:/app
ports:
- 4321:4321
command: [ pnpm, dev, '--host' ]

View File

@@ -1,3 +0,0 @@
FROM node:20-alpine
RUN corepack enable
USER 1000

1
notes.md Normal file
View File

@@ -0,0 +1 @@
https://www.arisacoba.com/

View File

@@ -1,60 +1,28 @@
{
"name": "morten-olsen-github-io",
"name": "private-webpage",
"type": "module",
"version": "0.0.1",
"scripts": {
"docker:install": "docker-compose -f docker-compose.dev.yml run --rm dev pnpm install",
"docker:dev": "docker-compose -f docker-compose.dev.yml up",
"dev": "astro dev",
"start": "astro dev",
"lint": "prettier \"**/*.{js,jsx,ts,tsx,md,mdx,astro}\" && eslint \"src/**/*.{js,ts,jsx,tsx,astro}\"",
"lint:apply": "prettier --write \"**/*.{js,jsx,ts,tsx,md,mdx,astro}\" && eslint --fix \"src/**/*.{js,ts,jsx,tsx,astro}\"",
"build": "astro check && astro build",
"preview": "astro preview",
"serve": "SITE_URL=http://localhost:3000 pnpm build && serve dist",
"astro": "astro",
"prepare": "husky"
},
"lint-staged": {
"*": "pnpm lint:apply"
"build": "astro build",
"preview": "astro build && astro preview",
"astro": "astro"
},
"dependencies": {
"@playform/compress": "^0.0.3",
"astro": "^5.3.0",
"date-fns": "^3.6.0",
"typescript": "^5.4.2"
},
"devDependencies": {
"@astrojs/check": "^0.9.4",
"@astrojs/mdx": "^4.0.8",
"@astrojs/rss": "^4.0.11",
"@astrojs/sitemap": "^3.2.1",
"@eslint/js": "^8.57.0",
"@iconify-json/mdi": "^1.1.64",
"@img/sharp-wasm32": "^0.33.3",
"@types/jsonld": "^1.5.13",
"@types/node": "^20.12.2",
"@typescript-eslint/parser": "^7.5.0",
"astro-capo": "^0.0.1",
"astro-compress": "^2.2.16",
"astro-icon": "^1.1.0",
"@astrojs/mdx": "^4.3.12",
"@astrojs/rss": "^4.0.14",
"@astrojs/sitemap": "^3.6.0",
"@fontsource/vt323": "^5.2.7",
"@playform/compress": "^0.2.0",
"astro": "^5.16.3",
"astro-icon": "^1.1.5",
"astro-robots-txt": "^1.0.0",
"eslint": "^8.57.0",
"eslint-plugin-astro": "^0.33.1",
"eslint-plugin-jsx-a11y": "^6.8.0",
"husky": "^9.0.11",
"json-schema-to-typescript": "^13.1.2",
"less": "^4.2.0",
"lint-staged": "^15.2.2",
"prettier": "^3.2.5",
"prettier-config-standard": "^7.0.0",
"prettier-plugin-astro": "^0.13.0",
"sass": "^1.72.0",
"serve": "^14.2.1",
"sharp": "^0.33.3",
"sharp-ico": "^0.1.5",
"tsx": "^4.7.1",
"vite-plugin-pwa": "^0.19.7"
"canvas": "^3.2.0",
"sharp": "^0.34.5"
},
"packageManager": "pnpm@10.3.0+sha512.ee592eda8815a8a293c206bb0917c4bb0ff274c50def7cbc17be05ec641fc2d1b02490ce660061356bd0d126a4d7eb2ec8830e6959fb8a447571c631d5a2442d"
"packageManager": "pnpm@10.27.0+sha512.72d699da16b1179c14ba9e64dc71c9a40988cbdc65c264cb0e489db7de917f20dcf4d64d8723625f2969ba52d4b7e2a1170682d9ac2a5dcaeaab732b7e16f04a",
"devDependencies": {
"less": "^4.4.2",
"vite-plugin-pwa": "^1.2.0"
}
}

10514
pnpm-lock.yaml generated

File diff suppressed because it is too large

3
pnpm-workspace.yaml Normal file
View File

@@ -0,0 +1,3 @@
onlyBuiltDependencies:
- canvas
- sharp

View File

@@ -1,18 +0,0 @@
import { fileURLToPath } from 'url'
import { resolve, dirname } from 'path'
import { writeFile, mkdir } from 'fs/promises'
import { compile } from 'json-schema-to-typescript'
const root = fileURLToPath(new URL('..', import.meta.url))
const response = await fetch(
'https://raw.githubusercontent.com/jsonresume/resume-schema/master/schema.json'
)
const schema = await response.json()
const types = await compile(schema, 'ResumeSchema', { bannerComment: '' })
const location = resolve(root, 'src/types/resume-schema.ts')
console.log(`Writing to ${location}`)
await mkdir(dirname(location), { recursive: true })
await writeFile(location, types)

View File

@@ -1,5 +1,5 @@
import { getImage } from 'astro:assets'
import { data } from '@/data/data.js'
import { data } from '~/data/data'
const imageSizes = [16, 32, 48, 64, 96, 128, 256, 512]

View File

@@ -1,30 +0,0 @@
---
import { data } from '@/data/data'
const { basics } = data.profile
---
<nav>
<a href='/'>{basics.name}</a>
<div>{basics.tagline}</div>
</nav>
<style lang='less'>
nav {
margin: 0 auto;
width: 100%;
max-width: var(--content-width);
text-align: center;
padding: var(--space-lg) var(--space-lg);
}
a {
font-size: var(--font-xl);
}
div {
font-size: var(--font-md);
color: var(--color-text-light);
font-weight: 300;
}
</style>

View File

@@ -0,0 +1,96 @@
---
import { Picture } from "astro:assets";
import { data } from "~/data/data";
const currentPath = Astro.url.pathname;
const { Content, ...profile } = data.profile;
const currentExperience = await data.experiences.getCurrent()
const links = {
'/': 'Posts',
'/about/': 'About',
}
---
<header class="header">
<a class="image" href="/">
<Picture
class="picture"
alt='Profile Picture'
src={profile.image}
fetchpriority="high"
formats={['avif', 'webp', 'jpeg']}
width={120}
/>
</a>
<a class="info" href="/">
<div class="name">{profile.name}</div>
{currentExperience && (
<div class="work">
{currentExperience.data.position.name} @ {currentExperience.data.company.name}
</div>
)}
</a>
<div class="links">
{Object.entries(links).map(([target, name]) => (
<a class={currentPath === target ? 'link active' : 'link'} href={target}>{name}</a>
))}
</div>
</header>
<style>
.header {
max-width: var(--content-width);
margin: 80px auto 0 auto;
padding: 30px;
display: grid;
gap: var(--gap);
align-items: center;
grid-template-columns: auto auto 1fr auto;
grid-template-rows: auto;
grid-template-areas:
"image info . links";
}
img {
width: 50px;
height: 50px;
border-radius: 50%;
}
.image {
grid-area: image;
}
.info {
grid-area: info;
display: flex;
flex-direction: column;
.name {
font-weight: var(--fw-md);
color: var(--t-fg);
text-decoration: none;
}
.work {
font-size: var(--fs-sm);
color: var(--t-fg);
text-decoration: none;
}
}
.links {
grid-area: links;
display: flex;
gap: var(--gap);
.link {
padding: 7px 12px;
border-radius: var(--radius);
&.active {
background: var(--c-bg-em);
}
}
}
</style>

View File

@@ -0,0 +1,179 @@
---
import { icons } from '~/assets/images/images.icons';
import Header from './Header.astro';
type Props = {
title: string;
description: string
jsonLd?: unknown;
image?: string;
themeColor?: string;
}
const { title, description, jsonLd, themeColor, image }= Astro.props;
const schema = JSON.stringify(jsonLd)
---
<!doctype html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta http-equiv='X-UA-Compatible' content='IE=edge' />
<meta name='HandheldFriendly' content='True' />
<meta name="viewport" content="width=device-width" />
<meta name="generator" content={Astro.generator} />
<link rel='sitemap' href='/sitemap-index.xml' />
<link rel='manifest' href='/manifest.webmanifest' />
<script is:inline defer src="https://umami.olsen.cloud/script.js" data-website-id="3284fd7a-6452-4048-8c8c-19740171b793" />
{themeColor && <meta name='theme-color' content={themeColor} />}
<link
rel='alternate'
type='application/rss+xml'
title='RSS Feed'
href='/articles/rss.xml'
/>
<meta name='description' content={description} />
{image && <meta property='og:image' content={image} />}
{
jsonLd && (
<script type='application/ld+json' is:inline set:html={schema} />
)
}
{
icons.pngs.map((icon) => (
<link rel='icon' href={icon.src} type='image/png' sizes={icon.size} />
))
}
<title>{title}</title>
</head>
<body>
<Header />
<slot />
</body>
</html>
<script is:inline>
const withFadeIn = document.querySelectorAll('[data-fadein]');
withFadeIn.forEach((node) => {
const observer = new IntersectionObserver((entries) => {
entries.forEach(entry => {
if (entry.isIntersecting) {
node.classList.remove('hidden');
node.setAttribute('data-shown', 'true')
} else if (!node.hasAttribute('data-shown')) {
node.classList.add('hidden');
}
})
})
observer.observe(node);
})
</script>
<style is:global>
@view-transition {
navigation: auto; /* enabled! */
}
* {
margin: 0;
padding: 0;
box-sizing: border-box;
text-indent: 0;
list-style-type: none;
}
html,
body {
width: 100%;
height: 100%;
}
:root {
--c-bg-em: rgb(241, 242, 246);
--c-line: #d3d3d3;
--content-width: 800px;
--gap: 16px;
--system-ui: system-ui, "Segoe UI", Roboto, Helvetica, Arial, sans-serif, "Apple Color Emoji", "Segoe UI Emoji", "Segoe UI Symbol";
--t-fg: #222;
--bg: #fff;
--fw-df: 400;
--fw-md: 600;
--fs-sm: 14px;
--fs-md: 16px;
--radius: 10px;
--font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto,
Oxygen-Sans, Ubuntu, Cantarell, 'Helvetica Neue', sans-serif;
--font-family-heading: var(--font-family);
--font-size: 15px;
--letter-spacing: 0.5px;
--color: #000;
--background-color: #f1f5f9;
--content-width: 1000px;
--space-xs: 0.25rem;
--space-sm: 0.5rem;
--space-md: 1rem;
--space-lg: 2rem;
--space-xl: 3rem;
--space-xxl: 4rem;
--font-xxl: 2rem;
--font-xl: 1.5rem;
--font-lg: 1.1rem;
--font-sm: 0.875rem;
--font-xs: 0.75rem;
--color-text-light: rgb(75, 85, 99);
--color-border: #ddd;
--color-link: #007bff;
--radius-sm: 0.25rem;
}
html {
background-color: var(--bg);
}
body {
font-family: var(--system-ui);
font-size: var(--fs-md);
font-weight: var(--fw-df);
color: var(--t-fg);
}
h1, h2, h3, h4, h5, h6 {
font-weight: inherit;
font-size: inherit;
}
a {
color: var(--t-fg);
text-decoration: none;
}
p a {
color: var(--t-fg);
text-decoration: underline;
text-decoration-color: var(--c-line);
text-underline-offset: .35em;
}
p {
line-height: 1.6;
}
p {
margin-bottom: var(--gap);
}
[data-fadein] {
transition: all 1s;
}
.hidden {
opacity: 0;
}
</style>

52
src/content.config.ts Normal file
View File

@@ -0,0 +1,52 @@
import { defineCollection, z } from 'astro:content';
import { glob } from 'astro/loaders';
const posts = defineCollection({
loader: glob({ pattern: "**/index.mdx", base: "./src/content/posts" }),
schema: ({ image }) => z.object({
slug: z.string(),
title: z.string(),
subtitle: z.string().optional(),
description: z.string(),
color: z.string(),
pubDate: z.coerce.date(),
updatedDate: z.coerce.date().optional(),
tags: z.array(z.string()).optional(),
heroImage: image(),
})
});
const experiences = defineCollection({
loader: glob({ pattern: "**/*.mdx", base: "./src/content/experiences" }),
schema: ({ image }) => z.object({
slug: z.string(),
company: z.object({
name: z.string(),
url: z.string().url().optional(),
}),
position: z.object({
name: z.string(),
team: z.string().optional(),
}),
summary: z.string().optional(),
startDate: z.coerce.date(),
endDate: z.coerce.date().optional(),
logo: image().optional(),
banner: image().optional(),
stack: z.array(z.string()).optional(),
})
});
const skills = defineCollection({
loader: glob({ pattern: "**/*.mdx", base: "./src/content/skills" }),
schema: z.object({
slug: z.string(),
name: z.string(),
technologies: z.array(z.string()),
})
});
const collections = { posts, experiences, skills };
export { collections };

View File

@@ -1,45 +0,0 @@
---
title: A defense for the coding challenge
description: none
pubDate: 2022-04-15
heroImage: ./assets/cover.png
color: '#3d91ef'
slug: a-defense-for-the-coding-challenge
---
Let's talk about code challenges. Code challenges are a topic with many opinions, and something I have been unsure whether I liked or hated. Still, I would like to make a case for why I think there are situations where this practice is beneficial, not only for the interviewer but for the candidate as well.
But before getting that far, I would like to point out some of the downsides to code challenges, because they aren't one-size-fits-all, and you may want to steer completely clear of them or only use them in specific circumstances.
## Downside 1
The primary issue with coding challenges is that they may be built in a way that prevents the candidate from showing their strengths. I have, for instance, often seen logic-style code challenges applied to all development positions, so a front-end developer would be quizzed on their ability to solve sorting algorithms, when what they would actually be doing after being hired is aligning things correctly with CSS. A skill test that ultimately assesses an entirely different set of skills than what is needed will alienate the candidate and allow a candidate strong in the quizzed topic to outshine one with the basic skills the role actually requires.
Later I will talk a bit about some requirements that I think need to be considered in a good code test, so that, if one is used, it at least gives a better indication of a candidate's skill in relation to the specific role, and not just as a "guy who does computer stuff".
## Downside 2
The second one I have mentioned before: in a competitive hiring market, being the company with the most prolonged hiring process means that you might very well miss out on some of the best candidates, either because they don't have the spare time to complete these tasks or because another company was able to close the hire quicker.
# Why you may want to use code challenges
Unfortunately, many people don't perform well in interviews. Without a technical assessment, the only place for a candidate to showcase their skills is in the interview itself.
The IT space has historically been associated with an introvert stereotype. While not always the case, introverts are definitely out there, and there is nothing wrong with that, but they are usually not the strongest at selling themselves, and that is basically what most job interviews are about. So if we give a candidate only the interview to showcase their skills, it stands to reason that the person we end up hiring isn't necessarily the strongest candidate for the job but the one best at showcasing their skills.
Using a code challenge alongside the interview allows you to use the interview itself to assess the person, get an idea of how they would interact with the team, and take the time to explain what the job would be like, without the "hidden" agenda of trying to trip them up with random technical questions to see if they can answer correctly on the spot.
So instead of the on-the-spot question style, the candidate gets the time to seek out information and solve the tasks in a way more reminiscent of how they would work in the real world.
Additionally, if done right, the code challenge can also help the company and team prepare for the new colleague after the hire. For example, suppose your code challenge can indicate the candidate's strengths, weaknesses, and knowledge level with various technologies. That can help you put together a training program that supports the new hire in being up and running and comfortable in the position as quickly as possible.
## What makes a good code challenge
It isn't easy to answer, as it varies from position to position, team to team, and company to company. Some jobs may require a specific knowledge set, where "implement a sorting algorithm" may be the proper test and something you would expect any candidate to be able to do.
But here are a few questions I would use to evaluate the value of a code challenge:
1. Does it cover all the areas you are interested in for a candidate? This is not to evaluate whether the candidate has ALL the skills, but rather to see if they have some skills that would add value to the team. For instance, if the role is for a front-end team that does front-end development, back-end for front-end, QA, DevOps, etc., the test should allow a candidate to showcase skills across those areas. If your test is too heavily focused on one aspect, let's say front-end development, you may miss a candidate who could have elevated the entire team's ability at QA.
1. Does it allow for flexible timeframes? Some candidates may not have time to spend 20 hours completing your code challenge, and the test should respect that. So if you have a lot of different tasks, as in the example above, you shouldn't expect the candidate to complete them all, even if they have the time. Instead, suggest a time frame and give the candidate the option of picking particular focus areas to complete. That way, you respect their time, and you also allow them to showcase the skills they feel strongest at.
Another bonus is to give the candidate the ability to submit additional considerations and caveats alongside their solution. For example, a candidate may have chosen a particular path because the "right" approach wasn't clear from the context, made suboptimal choices to stay within the timeframe, or even skipped parts because of scope but still want to elaborate. This way, you get closer to the complete picture, not just the code pushed to the repo.

View File

@@ -1,66 +0,0 @@
---
title: A meta talk about Git strategies
pubDate: 2022-12-05
color: '#ff9922'
heroImage: ./assets/cover.png
description: 'Can Git be your trusted "expected state" for deployments?'
slug: a-meta-talk-about-git-strategies
---
Let me start with a (semi) fictional story: It is Friday, and you and your team have spent the last five weeks working on this excellent new feature. You have written a bunch of unit tests to ensure that you maintain your project's impressive 100% test coverage, and you, your product owner, and the QA testers have all verified that everything is tip-top and ready to go for the launch! You hit the big "Deploy" button. 3-2-1, success! It is released to production, and everyone gets their glass of Champagne!
You go home for the weekend, satisfied with the great job you did.
On Monday, you open your email to find it flooded with customers screaming that nothing is working! Oh no, you must have made a mistake!!! So you set about debugging and quickly locate the error message in your monitoring, check out the code from Git, and start investigating. But the error that is happening isn't even possible. So you spend the entire day debugging, again and again, coming to the same conclusion: this is not possible.
So finally, you decide to go and read the deployment log line-by-painstakingly-line, and there, on line 13.318, you see it! One of your 12 microservices failed its deployment! The deployment used a script with a pipe in it. Unfortunately, the script did not have pipefail configured and therefore did not produce a non-zero exit code, so the deployment just kept humming along, deploying the remaining 11 with success. This chain of events resulted in a broken infrastructure state, unhappy customers, an entire Monday spent debugging, and potentially the ENTIRE EXISTENCE coming to an end!
I think most developers have a story similar to the one above, so why is getting release management right so damn hard? Modern software architecture and the tools that support it are complex machinery, and that goes for our deployment tools too. Ensuring that every little thing went as planned therefore means checking hundreds, if not thousands, of items, each harder to decipher than the last (anyone who has ever tried to debug a broken Xcode build from an output log will know this).
So is there a better way? Unfortunately, when things break, any of those thousands of items could be the reason, so when stuff does break, the answer is most likely no. But what about just answering the simple question: "Is something broken?" Well, I am glad you asked, because I do believe that there is a better way, and it is a way that revolves around Git.
# Declaring your expected state
So I am going to talk about Kubernetes, yet again - a technology I use less and less but which, for some reason, ends up being part of my examples more and more often.
At its core, Kubernetes has two conceptually simple tasks: one, it stores an expected state for the resources it is supposed to keep track of; and two, if any of those resources are, in fact, not in the expected state, it tries to right the wrong.
This approach means that when we interact with Kubernetes, we don't ask it to perform a specific task - we never tell it, "create three additional instances of service X," but rather, "there should be five instances of service X".
This approach also means that instead of actions and events, we can use reconciliation - no tracking of what was and what is, just what we expect; the rest is the tool's responsibility.
It also makes it very easy for Kubernetes to track the health of the infrastructure - it knows the expected state. If the actual state differs, it is in some unhealthy state, and if it is unhealthy, it should either fix it or, failing that, raise the alarm for manual intervention.
# Git as the expected state
So how does this relate to Git? Well, Git is a version control system. As such, it should keep track of the state of the code. That, to me, doesn't just include when and why but also where - to elaborate: Git is already great at telling when something happened and also why (provided that you write good commit messages), but it should also be able to answer what the code state is in a given context.
So let's say you have a production environment; a good Git strategy, in my opinion, should be able to answer the question, "What is the expected code state on production right now?" And note the word "expected" here; it is crucial because Git is, of course, not able to do deployments or sync environments (in most cases) but what it can do is serve as our expected state that I talked about with Kubernetes.
The target is to be able to compare what we expect with what is actually there, completely independent of all the tooling that sits in between, as we want to remove that tooling as a source of error or complexity.
We want to have something with the simplicity of the Kubernetes approach - we declare an expected state, and the tooling enforces this or alerts us if it can not.
We also need to ensure that we can compare our expected state to the actual state.
To achieve this, we are going to focus on Git SHAs, so we will be tracking whether a deployed resource is a deployment of our expected SHA.
For a web resource, an excellent way to do this could be through a `/.well-known/deployment-meta.json`, while if you are running something like Terraform and AWS, you could tag your resources with the SHA - try to have as few different methods of exposing this information as possible to keep monitoring simple.
With this piece of information, we are ready to create our monitor. Let's say we have a Git ref called `environments/production`, and its HEAD points to what we expect to be in production. Comparing is now simply a matter of getting the SHA of the HEAD commit of that ref and comparing it to the one in our `/.well-known/deployment-meta.json`. If they match, the environment is in the expected state. If not, it is unhealthy.
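To make this concrete, here is a minimal sketch of such a monitor. The ref name, the URL, and the `sha` field inside `deployment-meta.json` are assumptions for illustration, not a prescribed format:
```typescript
import { execSync } from 'node:child_process';

// Hypothetical values - adjust to your own ref naming and deployment metadata.
const ENVIRONMENT_REF = 'environments/production';
const DEPLOYMENT_META_URL = 'https://example.com/.well-known/deployment-meta.json';

// The expected state: the SHA our environment ref points to.
// Assumes the ref is fetched locally; `git ls-remote` works just as well.
const getExpectedSha = () =>
  execSync(`git rev-parse ${ENVIRONMENT_REF}`, { encoding: 'utf8' }).trim();

// The actual state: the SHA the running deployment reports about itself.
const getActualSha = async () => {
  const response = await fetch(DEPLOYMENT_META_URL);
  const meta = (await response.json()) as { sha: string };
  return meta.sha;
};

const expected = getExpectedSha();
const actual = await getActualSha();
if (expected === actual) {
  console.log(`healthy: production is at ${actual}`);
} else {
  // Unhealthy: retrigger the deployment or raise the alarm here.
  console.error(`drift detected: expected ${expected}, found ${actual}`);
  process.exitCode = 1;
}
```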
Let's extend this a bit: we can add a scheduled task that checks the monitor. If the environment is unhealthy, it retriggers a deployment and, if that fails, raises the alarm - so even if a deployment failed and no one noticed yet, it will get auto-corrected the next time our simple reconciler runs. This can be done using something as simple as a GitHub workflow.
You could also go all in, write a Crossplane controller, and use the actual Kubernetes reconciler to ensure your environments are in a healthy state - go as crazy as you like, just remember to make the tool work for you, not the other way around.
So, now we have a setup where Git tracks the expected state, and we can easily compare the expected state and the actual state. Lastly, we have a reconciliation loop that tries to rectify any discrepancy.
# Conclusion
So as a developer, the only thing I need to keep track of is that my Git refs are pointing to the right stuff. Everything else is reconciliation that I don't have to worry about - unless it is irreconcilable, in which case I will get alerted.
As someone responsible for the infrastructure, the only thing I need to keep track of is that the expected state matches the actual state.
No more multi-tool lookup, complex log dives or timeline reconstruction (until something fails, of course)
I believe that the switch from Git being just the code to being the code state makes a lot of daily tasks more straightforward and more transparent, builds a more resilient infrastructure and is worth considering when deciding how you want to do Git.

View File

@@ -1,86 +0,0 @@
---
title: My day is being planned by an algorithm
pubDate: 2022-05-06
description: ''
color: '#e7d9ac'
heroImage: ./assets/cover.png
slug: bob-the-algorithm
---
import { Image } from 'astro:assets'
import TaskBounds from './assets/TaskBounds.png'
import Frame1 from './assets/Frame1.png'
import Graph1 from './assets/Graph1.png'
import Graph2 from './assets/Graph2.png'
Allow me to introduce Bob. Bob is an algorithm, and he has just accepted a role as my assistant.
I am not very good when it comes to planning my day, and the many apps out there that promise to help haven't solved the problem for me, usually due to three significant shortcomings:
1. Most day planner apps do what their paper counterparts would do: record the plan you create. I don't want to make the plan; someone should do that for me.
2. They help you create a plan at the start of the day that you have to follow throughout the day. My days aren't that static, so my schedule needs to change throughout the day.
3. They can't handle transits between locations very well.
So to solve those issues, I decided that the piece of silicon in my pocket, capable of doing a million calculations a second, should be able to help me do something other than waste time doom scrolling. It should let me get more done throughout the day and help me get more time for stuff I want to do. That is why I created Bob.
Also, I wanted a planning algorithm that was not only about productivity. I did not want to get into the same situation as poor Kiki in the book "The Circle", who gets driven insane by a planning algorithm that tries to hyper-optimize her day. Bob also needs to plan downtime.
Bob is still pretty young and still learning new things, but he has gotten to the point where I believe he is good enough to start using on a day-to-day basis.
<Image src={Frame1} alt='Frame1' />
How does Bob work? Bob gets a list of tasks, some from my calendar (both my work and my personal calendar), some from "routines" (which are daily tasks that I want to do most days, such as eating breakfast or picking up the kid), and some tasks come from "goals" which are a list of completable items. These tasks go into Bob, and he tries to create a plan for the next couple of days where I get everything done that I set out to do.
Tasks carry a bit more data than your standard calendar events to allow for good scheduling:
- An "earliest start time" and a "latest start time", which define when the task can be added to the schedule.
- A list of locations where the task can be completed.
- A duration.
- Whether the task is required.
- A priority.
<Image src={TaskBounds} alt='Task bounds' />
Bob uses a graph walk to create the optimal plan, where each node contains a few different things
- A list of remaining tasks
- A list of tasks that are impossible to complete in the current plan
- A score
- The current location
- The present time
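In code, such a node might look roughly like this (a sketch with made-up field names, not the actual implementation):
```typescript
// A sketch of what a node in Bob's planning graph could hold.
// Field and type names are illustrative only.
type Task = {
  id: string;
  earliestStart: Date;
  latestStart: Date;
  locations: string[]; // locations where the task can be completed
  durationMinutes: number;
  required: boolean;
  priority: number;
};

type PlanNode = {
  remaining: Task[]; // tasks not yet scheduled
  impossible: Task[]; // tasks that can no longer fit into this plan
  completed: Task[]; // tasks already placed in this branch of the plan
  score: number;
  location: string; // where I am in this branch of the plan
  time: Date; // the current time in this branch of the plan
};
```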
Bob starts by figuring out which locations I can go to in order to complete the remaining tasks and then creates new leaf nodes for all of those transits. Next, he figures out whether any of the remaining tasks become impossible to complete and when I will arrive at the location, and calculates a score for that node.
He then gets a list of all the remaining tasks for the current node that can be completed at the current location, again figuring out when I would be done with each task, updating the list of impossible tasks, and scoring the node.
If any node adds a required task to the impossible list, that node is considered dead, and Bob will not analyze it further.
<Image src={Graph1} alt='Graph1' />
Now we have a list of active leaves, and from that list, we find the node with the highest score and redo the process from above.
<Image src={Graph2} alt='Graph2' />
Bob has four different strategies for finding a plan.
- First valid: this finds the first plan that satisfies all constraints but may lead to non-required tasks getting removed, even though it would be possible to find a plan that included all tasks. This is the fastest and least precise strategy.
- First complete: this does the same as "First valid" but only exits early if it finds a plan that includes all tasks. This strategy will generally create pretty good plans but can contain excess transits. If it does not find any plans that contain all tasks, it will switch to the "All valid" strategy.
- All valid: this explores all paths until the path is either dead or completed. Then it finds the plan with the highest score. If there are no valid plans, it will switch to the "All" strategy.
- All: This explores all paths, even dead ones, and at the end returns the one with the highest score. This strategy allows a plan to be created even if it needs to remove some required tasks.
Scoring is quite simple at the moment, but something I plan to expand on a lot. Currently, the score gets increased when a task gets completed, and it gets decreased when a task becomes impossible. How much it is increased or decreased is influenced by the task's priority and whether the task is required. It also decreases based on the minutes spent transiting.
The leaf picked for analysis is the one with the highest score. This approach allows the first two strategies to create decent results, though they aren't guaranteed to be the best. It all comes down to how well the scoring variables are tuned. Currently, they aren't, but at some point, I plan to create a training algorithm for Bob, which will create plans, score them through "All", and then try to tweak the variables so that running the same plan through "First valid"/"First complete" arrives at the same result with as few nodes analyzed as possible.
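As a rough illustration of the scoring and the best-first leaf selection (reusing the `Task` and `PlanNode` sketches above; the weights are placeholders, not Bob's actual tuning):
```typescript
// Placeholder weights - the real values are what the planned training algorithm would tune.
const WEIGHT_COMPLETED = 10;
const WEIGHT_IMPOSSIBLE = -25;
const WEIGHT_PER_TRANSIT_MINUTE = -0.5;

// Completed tasks raise the score, impossible tasks lower it (both scaled by
// priority and whether the task is required), and transit time lowers it further.
const scoreNode = (node: PlanNode, transitMinutes: number): number =>
  node.completed.reduce(
    (sum, task) => sum + WEIGHT_COMPLETED * task.priority * (task.required ? 2 : 1),
    0,
  ) +
  node.impossible.reduce(
    (sum, task) => sum + WEIGHT_IMPOSSIBLE * task.priority * (task.required ? 2 : 1),
    0,
  ) +
  transitMinutes * WEIGHT_PER_TRANSIT_MINUTE;

// The next leaf to expand is simply the live leaf with the highest score.
const pickNextLeaf = (leaves: PlanNode[]): PlanNode | undefined =>
  leaves.reduce<PlanNode | undefined>(
    (best, leaf) => (best === undefined || leaf.score > best.score ? leaf : best),
    undefined,
  );
```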
This approach also allows me to calculate a plan with any start time, so I can re-plan it later in the day if I can't follow the original plan or if stuff gets added or removed. So this becomes a tool that helps me get the most out of my day without dictating it.
Bob can also do multi-day planning. Here, he gets a list of tasks for the different days as he usually would and a "shared" list of goals. So he runs the same calculation, adding in the tasks for that day, along with the shared goal list, and everything remaining from the shared list then gets carried over to the next day. This process repeats for all the remaining days.
I have created a proof of concept app that houses Bob. I can manage tasks, generate plans, and update my calendar with those plans in this app.
There are also a few features that I want to add later. The most important one is an "asset" system. For instance, when calculating transits, it needs to know if I have brought the bike along because if I took public transit to work, it doesn't make sense to calculate a bike transit later in the day. This system would work by "assets" being tied to a task and location, and then when Bob creates plans, he knows to consider if the asset is there or not. Assets could also be tied to tasks, so one task may be to pick up something, another to drop it off. In those cases, assets would act as dependencies, so I have to have picked up the asset before being able to drop it off. The system is pretty simple to implement but causes the graph to grow a lot, so I need to do some optimizations before it makes sense to put it in.
Wrapping up: I have only been using Bob for a few days, but so far, he seems to create good plans and has helped me achieve more, both by getting productive tasks done and by scheduling downtime such as reading, meditating, and playing console games, and making sure I had time for those in the plan.
There is still a lot of stuff that needs to be done, and I will add in features and fix the code base slowly over time.
You can find the source for this algorithm and the app it lives in at [Github](https://github.com/morten-olsen/bob-the-algorithm), but beware, it is a proof of concept, so readability or maintainability hasn't been a goal.

View File

@@ -1,37 +0,0 @@
---
title: How to hire engineers, by an engineer
description: ''
pubDate: 2022-03-16
color: '#8bae8c'
heroImage: ./assets/cover.png
slug: how-to-hire-engineers-by-an-engineer
---
It has been a few years since I have been part of the recruitment process as a hirer. Still, I did recently go through the hiring process myself when looking for a new job, so I will mix a bit from both sides for this article: some experience from making hires and what worked, and some experience from the other side of the table and what caused me not to consider a company. Because, spoiler alert: engineers are contacted a lot!
So first I need to introduce a hard truth, as it will be underpinning a lot of my points and is most likely the most important takeaway from this: your company is not unique.
Unless your tech brand is among the X highest regarded in the world, your company alone isn't a selling point. I have been contacted by so many companies that thought that because they were a leader in their field or had a "great product", candidates would come banging at their door. If I could disclose all those messages, it would be really easy to see that, except for the order of the information, they all say almost the same thing, and chances are your job listing is the same. Sorry.
The takeaway from this is that if everything else is equal, any misstep in your hiring process can cost you that candidate, so if you are not amongst the strongest of tech brands, you need to be extremely aware, or you will NOT fill the position.
Okay, after that slap in the face, we can take a second to look at something...
A lot of people focus on skills when hiring, and of course the candidate should have the skills for the position, but I will make a case for putting less focus on hard skills and more focus on passion.
Usually, screening for skills through an interview is hard, and techniques like code challenges have their own issues, but more on that later.
Screening for passion is easier: you can usually get a good feeling for whether a candidate is passionate about a specific topic, and passionate people want to learn! So even if the candidate has limited skills, if they have passion, they will learn, and they will outgrow a candidate with experience but no passion.
Filling a team with technical skills can solve an immediate requirement, but companies, teams, and products change, and your requirements will change along with them. A passionate team will adjust and evolve along the way, whereas a team consisting of skilled people without passion will stay where they were when you hired them.
Another issue I see in many job postings is requiring a long list of skills. It would be awesome to find someone skilled in everything who could solve all tasks, but in the real world, whenever you add another skill to that list, you are limiting the pool of candidates that would fit, so chances are you are not going to find anyone, or the actual skills of the candidates in that very narrow pool will be far lower than in a wider one.
A better way is to list only the most important skills and teach the candidate any less important skills on the job. If you hired passionate people, this should be possible (remember to screen for passion about learning new things).
While we are on the expected skill list: a lot of companies have a list of "it would be really nice if you had these skills". Those could definitely be framed as learning opportunities instead. If you have recruited passionate people, seeing that they will learn cool new skills counts as a plus, and any candidate who already has the skill will see it and think, "awesome, I am already uniquely suited for the job!"
I promised to talk a bit about code challenges: they can be useful for screening a candidate's ability to go in and start working from day one, and if done correctly, they can help a manager organise their process to best suit the team's unique skills, but...
Hiring at the moment is hard! And as stated, pretty much every job listing I have seen is identical, so just as, in a competitive job market, a small outlier on your resumé lands you in the pile that never gets read, in a competitive hiring market it is just as likely that your listing never gets acted upon.
Engineers are contacted a lot by recruiters, and speaking to all of them would require a lot of work, so if a company has a prolonged process, it quickly gets sorted out, especially by the best candidates, who most likely get contacted the most and most likely already have a full-time job, so time is a scarce resource.
So be aware that if you use time-consuming processes such as the code challenge, you might miss out on the best candidates.
Please just disclose the salary range. From being connected to a few hundred recruiters here on LinkedIn, I can see that this isn't just me but a general issue. As mentioned before, it takes very little to have your listing ignored, and most of your strongest potential candidates most likely already have full-time jobs and would not want to move to a position paying less unless the position were absolutely unique (which, again, yours most likely isn't). Therefore, if you choose not to disclose the salary range, be aware that you will miss out on most of the best candidates. A company will get an immediate no from me if it doesn't disclose the salary range.
Lastly, I have spent a lot of words telling you that your company or position isn't unique, and, well, we both know that is not entirely accurate: your company most likely has something unique to offer, be that soft values or hard benefits. Be sure to put them in your job listing to bring out this uniqueness; it is what is going to set you apart from the other listings. There are lots of other companies with the same tech stack, using an agile approach, with a high degree of autonomy, with a great team... But what can you offer that no one else can? Get it front and center... Recruiting is marketing and good copywriting.

View File

@@ -1,94 +0,0 @@
---
title: My Home Runs Redux
pubDate: 2022-03-15
color: '#e80ccf'
description: ''
heroImage: ./assets/cover.png
slug: my-home-runs-redux
---
import graph from './assets/graph.png'
import { Image } from 'astro:assets'
I have been playing around with smart homes for a long time; I have used most of the platforms out there, I have developed quite a few myself, and one thing I keep coming back to is Redux.
Those who know what Redux is may find this a weird choice, but for those who don't know Redux, I'll give a brief introduction to get up to speed.
Redux is a state management framework, initially built for a React talk by Dan Abramov and is still primarily associated with managing React applications. Redux has a declarative state derived through a "reducer"-function. This reducer function takes in the current state and an event, and, based on that event, it gives back an updated state. So you have an initial state inside Redux, and then you dispatch events into it, each getting the current state and updating it. That means that the resulting state will always be the same given the same set of events.
So why is a framework primarily used to keep track of application state for React-based frontends a good fit for a smart home? Well, your smart home platform most likely closely mimics this architecture already!
First, an event goes in, such as a motion sensor triggering, or you setting the bathroom light to 75% brightness in the interface. This event then goes into the platform and hits some automation or routine, resulting in an update request getting sent to the correct devices, which then change their state to match the new one.
...But that is not quite what happens on most platforms. Deterministic events may go into the system, but this usually doesn't cause a change to a deterministic state. Instead, the event gets dispatched to the device, the device updates, the platform sees this change, and then it updates its state to represent that new device state.
This distinction is essential because it comes with a few drawbacks:
- Because the event does not change the state but sends a request to the device that does it, everything becomes asynchronous and can happen out of order. This behaviour can be seen either as an issue or a feature, but it does make integrating with it a lot harder from a technical point of view.
- The request is sent to the device as a "fire-and-forget" event. It then relies on the success of that request and the subsequent state change to be reported back from the device before the state gets updated. This behaviour means that if this request fails (something you often see with ZigBee-based devices), the device and the state don't get updated.
- Since the device is responsible for reporting the state change, you are dependent on having that actual device there to make the change. Without sending the changes to the actual device, you cannot test the setup.
So can we create a setup that gets away from these issues?
Another thing to add here is more terminology/philosophy, but most smart home setups are, in my opinion, not really smart, just connected and, to some extent, automated. I want a design that has some actual smartness to it. In this article, I will outline a setup closer to that of the connected, automated home, and at the end, I will give some thoughts on how to take this to the next level and make it smart.
We know what we want to achieve, and Redux can help us solve this. Remember that Redux takes actions and applies them in a deterministic way to produce a deterministic state.
Time to go a bit further down the React rabbit hole because another thing from React-land comes in handy here: the concept of reconciliation.
Instead of dispatching events to the devices and waiting for them to update and report their state back, we can rely on reconciliation to update our devices. For example, let's say we have a device state for our living room light that says it is at 80% brightness in our Redux store. So now we dispatch an event that sets it to 20% brightness.
Instead of sending this event to the device, we update the Redux state.
We have a state listener that detects when the state changes and compares it to the state of the actual device. In our case, it seems that the state indicates that the living room light should be at 20% but are, in fact, at 80%, so it sends a request to the actual device to update it to the correct value.
We can also do scheduled reconciliation to compare our Redux state to that of the actual devices. If a device fails to update its state after a change, it will automatically get updated on our next scheduled run, ensuring that our smart home devices always reflect our state.
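A stripped-down sketch of that idea is shown below. The device ids and the `readDevice`/`sendToDevice` functions are made up for illustration and stand in for whatever integration layer talks to the real hardware:
```typescript
// Minimal sketch of the reducer + reconciliation split described above.
type DeviceState = { brightness: number };
type HomeState = Record<string, DeviceState>;
type SetBrightness = { type: 'setBrightness'; deviceId: string; brightness: number };

// The reducer only updates the expected state - no device I/O happens here,
// so the same events always produce the same state.
const reducer = (state: HomeState, event: SetBrightness): HomeState => {
  switch (event.type) {
    case 'setBrightness':
      return { ...state, [event.deviceId]: { brightness: event.brightness } };
    default:
      return state;
  }
};

// Reconciliation: compare the expected state to the real devices and fix any drift.
const reconcile = async (
  expected: HomeState,
  readDevice: (id: string) => Promise<DeviceState>,
  sendToDevice: (id: string, next: DeviceState) => Promise<void>,
) => {
  for (const [id, wanted] of Object.entries(expected)) {
    const actual = await readDevice(id);
    if (actual.brightness !== wanted.brightness) {
      await sendToDevice(id, wanted); // a failed update is retried on the next scheduled run
    }
  }
};
```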
_Sidenote: Yes, of course, I have done a proof of concept using React with a home-built reconciler that reflected the virtual DOM onto physical devices, just to have had a house that ran React-Redux_
Let's go through our list of issues with how most platforms handle this. We can see that we have eliminated all of them by switching to this Redux-reconciliation approach: we update the state directly to run it synchronously. We can re-run the reconciliation so failed or dropped device updates get re-run. We don't require any physical devices as our state is directly updated.
We now have a robust, reliable state management mechanism for our smart home; time to add some smarts to it. This is a little outside the article's main focus, as it is just my way of doing it; there may be far better ways, so use it at your discretion.
Redux has the concept of middlewares, which are stateful functions that live between the event going into Redux and the reducer updating the state. These middlewares allow Redux to deal with side effects and do event transformations.
Time for another piece of my smart home philosophy: most smart homes act on events, and I have used that word throughout this article, but to me, events are not the most valuable thing when creating a smart home. Instead, I would argue that the goal is to deal with intents rather than events. For instance, an event could be that I started to play a video on the TV. But that only states a fact; what we want to capture is what I am trying to achieve, the "intent". So let's split this event into two intents: if the video is less than one hour long, I want to watch a TV show; if it is longer, I want to watch a movie.
These intents allow us to avoid building complex operations on top of weak-meaning events and instead split our concern into two separate concepts: intent classification and intent execution.
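As a rough sketch of the classification step, here is what it could look like as a Redux-style middleware in TypeScript. The action shapes and the one-hour rule are illustrations of the idea, not a finished design:

```typescript
// Illustrative action shapes, not a fixed protocol.
type EnvironmentEvent = { type: 'environment'; source: string; minutes?: number };
type Intent = { type: 'intent'; name: 'watch-movie' | 'watch-tv-show' };
type ControlEvent = { type: 'control'; deviceId: string; brightness: number };
type Action = EnvironmentEvent | Intent | ControlEvent;

// Intent classification: derive what I am trying to achieve from a raw
// environment event (here: the length of the video that started playing).
const classifyIntent = (event: EnvironmentEvent): Intent | undefined => {
  if (event.source === 'tv' && event.minutes !== undefined) {
    return { type: 'intent', name: event.minutes > 60 ? 'watch-movie' : 'watch-tv-show' };
  }
  return undefined;
};

// Redux-style middleware: environment events are classified into intents
// before reaching the reducer; control events pass straight through.
const intentMiddleware =
  () =>
  (next: (action: Action) => void) =>
  (action: Action) => {
    if (action.type === 'environment') {
      const intent = classifyIntent(action);
      if (intent) next(intent);
      return; // unclassified environment events are simply dropped (or logged)
    }
    next(action);
  };
```

In this setup, the intent executor would be another middleware (or listener) that turns intents into concrete device changes before they hit the reducer.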
The last thing we need is a direct way of updating devices, as we cannot capture everything through our intent classifier. For instance, if I sit down to read a book, that does not generate any sensor data for our system to react to, so I will still need a way to adjust device states manually. (I could, of course, add a button that dispatches a reading intent.)
I have separated the events going into Redux into two types:
- control events, which directly control a device
- environment events, which represent incoming sensor data (a button push, a motion sensor triggering, the TV playing, etc.)
Now comes the part I have feared, where I need to draw a diagram.
...sorry
<Image src={graph} alt='graph' />
So this shows our final setup.
Events go into our Redux setup, either environment or control.
Control events go straight to the reducer, and the state is updated.
Environment events first go to the intent classifier, which uses previous events, the current state, and the incoming event to derive the correct intent. The intent then goes into our intent executor, which converts the intent into a set of actual device changes, which gets sent to our reducer, and the state is then updated.
Lastly, we invoke the reconciliation to update our real devices to reflect our new state.
There we go! We have ended up with a self-contained setup. We can run it without the reconciliation, or mock it, to write tests for our setup without changing any real devices, and we can re-run the reconciliation against our state to ensure our devices end up in the correct state, even if one of them misses an update.
**Success!!!**
But I promised to give an idea of how to take this smart home and make it actually "smart."
Let's imagine that we did not want to "program" our smart home but simply wanted to use it: turning the lights on and off using the switches when we enter and exit a room, dimming the lights for movie time, and so on. Over time, we want our smart home to pick up on those routines and start doing them for us.
We have a setup where both control events and environment events come in. Control events represent how we want the state of our home to be in a given situation. Environment events represent what happened in our home. So we could store those historically and, with some machine learning, look for patterns.
Let's say you always dim the light when playing a movie that is more than one hour long; your smart home would be able to recognize this pattern and automatically start to do this routine for you.
Would this work? I don't know. I am trying to get more skilled at machine learning to find out.

View File

@@ -1,62 +0,0 @@
import { defineCollection, z } from 'astro:content'
import { glob } from 'astro/loaders'
import { resolve } from 'path'
const base = import.meta.dirname
const articles = defineCollection({
loader: glob({ pattern: '*/index.mdx', base: resolve(base, 'articles') }),
schema: ({ image }) =>
z.object({
slug: z.string(),
title: z.string(),
description: z.string(),
color: z.string(),
pubDate: z.coerce.date(),
updatedDate: z.coerce.date().optional(),
tags: z.array(z.string()).optional(),
heroImage: image()
})
})
const work = defineCollection({
loader: glob({ pattern: '*.mdx', base: resolve(base, 'work') }),
schema: ({ image }) =>
z.object({
slug: z.string(),
name: z.string(),
position: z.string(),
startDate: z.coerce.date(),
endDate: z.coerce.date().optional(),
summary: z.string().optional(),
url: z.string().optional(),
logo: image().optional(),
banner: image().optional()
})
})
const references = defineCollection({
loader: glob({ pattern: '*.mdx', base: resolve(base, 'references') }),
schema: () =>
z.object({
slug: z.string(),
name: z.string(),
position: z.string(),
company: z.string(),
date: z.coerce.date(),
relation: z.string(),
profile: z.string()
})
})
const skills = defineCollection({
loader: glob({ pattern: '*.mdx', base: './src/content/skills' }),
schema: () =>
z.object({
slug: z.string(),
name: z.string(),
technologies: z.array(z.string())
})
})
export const collections = { articles, work, references, skills }

View File

@@ -1,10 +1,15 @@
---
company:
name: BilZonen
position: Web Developer
position:
name: Web Developer
startDate: 2010-06-01
endDate: 2012-02-28
summary: As a part-time web developer at bilzonen.dk, I managed both routine maintenance and major projects like new modules and integrations, introduced a custom provider-model system in .NET (C#) for data management, and established the development environment, including server setup and custom tools for building and testing.
slug: bilzonen-1
stack:
- .NET
- UmbracoCMS
---
I work as a part-time web developer on bilzonen.dk. I have worked on both day-to-day maintenance and large-scale projects (new search module, integration of a new data catalog, mobile site, new-car catalog, and the entire dealer solution). The page is an Umbraco solution, with all code in .NET (C#). I have introduced a new custom-built provider-model system, which allows data providers to move data between data stores, external services, and the site (search, caching, and external car data run through the provider system). I have also set up the development environment, from setting up virtual server hosts to building custom tools for building and unit testing.

View File

Before

Width:  |  Height:  |  Size: 3.2 KiB

After

Width:  |  Height:  |  Size: 3.2 KiB

View File

@@ -1,11 +1,18 @@
---
company:
name: Sampension
position: Senior Frontend Developer
position:
name: Senior Frontend Developer
startDate: 2018-01-01
endDate: 2021-12-31
logo: ./assets/logo.jpeg
summary: At Sampension, a Danish pension fund, I designed and helped build a cross-platform frontend architecture using React Native and React Native for Web, ensuring a unified, maintainable codebase for native iOS, Android, and web applications across devices.
slug: sampension
stack:
- TypeScript
- React Native
- Redux
- Gatsby
---
Sampension is a Danish pension fund, and my work has been to design and help build a frontend architecture that would run natively on iOS and Android as well as on the web, on both desktop and mobile devices.

View File

Before

Width:  |  Height:  |  Size: 5.8 KiB

After

Width:  |  Height:  |  Size: 5.8 KiB

View File

Before

Width:  |  Height:  |  Size: 4.6 KiB

After

Width:  |  Height:  |  Size: 4.6 KiB

View File

@@ -1,12 +1,18 @@
---
company:
name: Trendsales
position: Web Developer
position:
name: Web Developer
startDate: 2012-03-01
endDate: 2012-09-30
logo: ./assets/logo.png
banner: ./assets/banner.png
summary: At Trendsales, I started with a part-time role focused on maintaining the API for the iOS app, eventually diversifying my responsibilities to include broader platform development, allocating 25-50% of my time to the API.
slug: trendales-1
stack:
- .NET MVC
- Microsoft SQL
- ASP
---
I got a part-time job at Trendsales, where my primary responsibility was maintaining the API that powered the iOS app. My tasks quickly became more diverse, and I ended up using about 25-50 percent of my time on the API, while the rest was spent working on the platform in general.

View File

@@ -1,11 +1,18 @@
---
company:
name: Trendsales
position: iOS and Android Developer
position:
name: iOS and Android Developer
startDate: 2012-10-01
endDate: 2015-12-31
logo: ./trendsales-1/assets/logo.png
summary: I led the development of a new Xamarin-based iOS app from scratch at Trendsales, including a supporting API and backend work, culminating in a successful app with over 15 million screen views and 1.5 million sessions per month, and later joined a team to expand into Android development.
slug: trendsales-2
stack:
- Xamarin
- .NET WebAPI
- Microsoft SQL
- Android Java SDK
---
I became responsible for the iOS platform, a task that required a new app to be built from the ground up using _Xamarin_. In addition to that, a new API was needed to support the app, along with support for our larger vendors; it had to be built using something closely resembling _Microsoft MVC_ so that other people could join the project at a later stage.

View File

@@ -1,11 +1,17 @@
---
company:
name: Trendsales
position: Frontend Technical Lead
position:
name: Frontend Technical Lead
startDate: 2016-01-01
endDate: 2017-12-31
logo: ./trendsales-1/assets/logo.png
summary: In 2015, I spearheaded the creation of a new frontend architecture for Trendsales, leading to the development of m.trendsales.dk, using React and Redux, and devising bespoke frameworks for navigation, flexible routing, skeleton page transitions, and integrating workflows across systems like Github, Jira, Octopus Deploy, AppVeyor, and Docker.
slug: trendsales-3
stack:
- React
- PhoneGap
- Redux
---
In 2015, Trendsales decided to build an entirely new platform, and it became my responsibility to create a modernized frontend architecture. The work began in 2016 with just me on the project and consisted of a proof-of-concept version covering everything from framework selection, structure, style guides, build chain, and continuous deployment to an actual initial working version. The result was the platform that I was given technical ownership of and which I, along with two others, spent the next year expanding. The platform currently powers _m.trendsales.dk_. The project is built using React, and state management is done using Redux. In addition to the off-the-shelf frameworks, we also needed to develop quite a few bespoke frameworks in order to meet demands. Among others, these were created to solve the following issues:

View File

Before

Width:  |  Height:  |  Size: 194 KiB

After

Width:  |  Height:  |  Size: 194 KiB

View File

@@ -1,10 +1,21 @@
---
company:
name: ZeroNorth
position: Senior Software Engineer
position:
name: Senior Software Engineer
team: Vessel Reporting Team
startDate: 2022-01-01
endDate: 2023-05-01
logo: ./assets/logo.png
summary: At ZeroNorth, I develop and maintain a NextJS-based, offline-first PWA for on-vessel reporting, and enhance report processing infrastructure using Terraform and NodeJS.
slug: zeronorth-1
stack:
- TypeScript
- NextJS
- NodeJS
- Terraform
- AWS
- GitLab
---
I am currently employed at ZeroNorth, a Danish software-as-a-service company that specializes in providing solutions to help the shipping industry decarbonize through optimization. My primary focus has been on the development and maintenance of the on-vessel reporting platform. This platform is a NextJS-based PWA with offline-first capabilities, which allows for easy and efficient reporting on board ships.

View File

@@ -0,0 +1,27 @@
---
company:
name: ZeroNorth
position:
name: Senior Software Engineer
team: Voyage Optimisation
startDate: 2023-05-01
endDate: 2025-03-01
logo: ./zeronorth-1/assets/logo.png
summary: "// TODO: describe my position in the Voyage Optimisation Team"
slug: zeronorth-2
stack:
- TypeScript
- .NET
- NodeJS
- Tailwind
- React
- Redux
- RxJS
- Terraform
- AWS
- GitHub Actions
---
// TODO: describe my position in the Voyage Optimisation Team

View File

@@ -0,0 +1,19 @@
---
company:
name: ZeroNorth
position:
name: Senior Software Engineer
team: AI Team
startDate: 2025-03-01
logo: ./zeronorth-1/assets/logo.png
summary: "# TODO: describe my role in our new AI team"
slug: zeronorth-3
stack:
- Python
- LangChain
- Bedrock
- Terraform
- GitHub Actions
---
// TODO: describe my role in our new AI team

View File

Before

Width:  |  Height:  |  Size: 1.4 MiB

After

Width:  |  Height:  |  Size: 1.4 MiB

View File

@@ -0,0 +1,55 @@
---
title: A defense for the coding challenge
description: none
pubDate: 2022-04-15
heroImage: ./assets/cover.png
color: '#3d91ef'
slug: a-defense-for-the-coding-challenge
---
# A Defense for the Coding Challenge
Let's talk about code challenges. It's a topic with many opinions, and for a long time, I was unsure whether I liked or hated them. However, I'd like to make a case for why I think there are situations where this practice is beneficial, not only for the interviewer but for the candidate as well.
But before getting that far, I would like to point out some of the downsides to code challenges because it isn't a one-size-fits-all solution. You may want to steer completely clear of them or only use them in specific circumstances.
---
## Downside 1
The primary issue with coding challenges is that they may be built in a way that prevents the candidate from showing their strengths. For instance, I have often seen logic-style code challenges applied to all development positions. A front-end developer, whose job would be to align things correctly with CSS, would be quizzed on their ability to solve sorting algorithms. This skill test, which ultimately assesses an entirely different set of skills than what is needed, will alienate the candidate. It also allows a candidate with skills in the quizzed topic to outshine one who has the basic skills the role actually requires.
Later, I will talk a bit about some requirements that I think need to be considered in a good code test. If used, it should at least give a better indication of a candidate's skill concerning the specific role, not just as a "guy who does computer stuff."
---
## Downside 2
The second downside is one I have mentioned before. In a competitive hiring market, being the company with the most prolonged hiring process means you might very well miss out on some of the best candidates because they don't have the available time to complete these tasks in their spare time, or because another company was able to close the hire quicker.
---
# Why You May Want to Use Code Challenges
Unfortunately, many people don't perform well in interviews. Without a technical assessment, the only place for a candidate to showcase their skills is in the interview itself.
The IT space has historically been associated with an introverted stereotype. While that is not always the case, introverted developers are definitely out there, and there is nothing wrong with that. However, they are usually not the strongest at selling themselves, which is basically what most job interviews are. So if we give a candidate only the ability to showcase their skills through an interview, it stands to reason that the person we end up hiring isn't necessarily the strongest candidate for the job but the best at showcasing their skills.
Using a code challenge alongside the interview allows you to use the interview to assess the person. You can get an idea of how they would interact on the team and have time to explain what the job would be like without having the "hidden" agenda of trying to trip them up with random technical questions to see if they can answer correctly on the spot.
So instead of the on-the-spot question style, the candidate gets the time to seek information and solve the tasks, which is more reminiscent of how they would work in the real world.
Additionally, if done right, the code challenge can also help the company or team prepare for the new candidate after the hire. For example, your code challenge can indicate the candidate's strengths, weaknesses, and knowledge level with various technologies. This can help put a "training" program together to support the new hire to be up and running and comfortable in the position as quickly as possible.
---
## What Makes a Good Code Challenge
This isn't easy to answer, as it would vary from position to position, team to team, and company to company. Some jobs may require a specific knowledge set, where the "implement a sorting algorithm" may be the proper test and be something you would expect any candidate to be able to do.
But here are a few questions I would use to evaluate the value of a code challenge:
1. **Does it cover all the areas you are interested in for a candidate?** This is not to evaluate if the candidate has ALL skills but rather to see if they have skills that would add value to a team. For instance, if the role is for a front-end team that does both the front-end development, back-end for front-end, QA, DevOps, etc., the test should allow a candidate to showcase those skills. If, for instance, your test is too heavily focused on one aspect, let's say front-end development, you may miss a candidate who could have elevated the entire team's ability at QA.
2. **Does it allow for flexible timeframes?** Some candidates may not have time to spend 20 hours completing your code challenge, and the test should respect that. So if you have a lot of different tasks, as in the example above, you shouldn't expect the candidate to complete all, even if they have the time. Instead, make a suggested time frame and give the candidate the possibility of picking particular focus areas to complete. That way, you respect their time, and you also allow them to showcase the skills they feel they are strongest at.
Another bonus to add is to give the candidate the ability to submit additional considerations and caveats to their solution. For example, a candidate may have chosen a particular path because the "right" approach wasn't clear from the context, made suboptimal solutions to keep within the timeframe, or even skipped parts because of scope but still wants to elaborate. This way, you get closer to the complete picture, not just the code-to-repo.

View File

Before

Width:  |  Height:  |  Size: 1.6 MiB

After

Width:  |  Height:  |  Size: 1.6 MiB

View File

@@ -0,0 +1,72 @@
---
title: A meta talk about Git strategies
pubDate: 2022-12-05
color: '#ff9922'
heroImage: ./assets/cover.png
description: 'Can Git be your trusted "expected state" for deployments?'
slug: a-meta-talk-about-git-strategies
---
Let me start with a (semi) fictional story: It's Friday, and you and your team have spent the last five weeks working on this excellent new feature. You've written a bunch of unit tests to ensure you maintain your project's impressive 100% test coverage, and you, your product owner, and the QA testers have all verified that everything is tip-top and ready for the launch! You hit the big "Deploy" button. 3-2-1. Success! It's released to production, and everyone gets a glass of champagne!
You go home for the weekend, satisfied with the great job you did.
On Monday, you open your email to find it flooded with customers screaming that nothing is working! Oh no, you must have made a mistake! So you set about debugging and quickly locate the error message in your monitoring. You check out the code from Git and start investigating. But the error that's happening isn't even possible. So you spend the entire day debugging, again and again, coming to the same conclusion: This is not possible.
Finally, you decide to read the deployment log, line-by-painstakingly-line, and there, on line 13,318, you see it! One of your 12 microservices failed to deploy! The deployment used a script with a pipe. Unfortunately, the script did not have `pipefail` configured. The script, therefore, did not generate a non-zero exit code, so the deployment just kept humming along, deploying the remaining 11 with success. This chain of events resulted in a broken infrastructure state and unhappy customers. You spent the entire Monday debugging, and the entire existence of your company could be coming to an end!
I think most developers would have a story similar to the one above, so why is getting release management right so damn hard? Modern software architecture and the tools that help us are complex machines, which goes for our deployment tools. Therefore, ensuring that every little thing is as planned means we would have to check hundreds, if not thousands, of items, each more difficult to decipher than the last. (Anyone who has ever tried to solve a broken Xcode build from an output log will know this.)
So is there a better way? Unfortunately, when things break, any of those thousands of items could be the reason. So when stuff does break, the answer is most likely no, but what about just answering the simple question: "Is something broken?" Well, I'm glad you asked because I do believe that there is a better way, and it's a way that revolves around Git.
---
# Declaring Your Expected State
So I am going to talk about Kubernetes, yet again—a technology I use less and less but, for some reason, ends up being part of my examples more and more often.
At its core, Kubernetes has two conceptually simple tasks: First, it stores an expected state of the resources that it's supposed to keep track of; second, if any of those resources are not in the expected state, it tries to right the wrong.
This approach means that when we interact with Kubernetes, we don't ask it to perform a specific task. We never tell it, "create three additional instances of service X," but rather, "there should be five instances of service X."
This approach also means that instead of actions and events, we can use reconciliation—no tracking of what was and what is, just what we expect; the rest is the tool's responsibility.
It also makes it very easy for Kubernetes to track the health of the infrastructure. It knows the expected state. If the actual state differs, it's in an unhealthy state, and if it's unhealthy, it should either fix it or, failing that, raise the alarm for manual intervention.
---
# Git as the Expected State
So how does this relate to Git? Well, Git is a version control system. As such, it should keep track of the state of the code. That, to me, doesn't just include when and why but also where. To elaborate: Git is already great at telling when something happened and also why (provided that you write good commit messages), but it should also be able to answer what the code state is in a given context.
So let's say you have a production environment. A good Git strategy, in my opinion, should be able to answer the question, "What is the expected code state on production right now?" And note the word "expected" here; it's crucial because Git is, of course, not able to do deployments or sync environments (in most cases), but what it can do is serve as our expected state that I talked about with Kubernetes.
The goal is to be able to compare what we expect with what is actually there, completely independent of all the tooling that sits in between, as we want to remove those as a source of error or complexity.
We want to have something with the simplicity of the Kubernetes approach—we declare an expected state, and the tooling enforces this or alerts us if it cannot.
We also need to ensure that we can compare our expected state to the actual state.
To achieve this, we are going to focus on Git SHAs, so we will be tracking if a deployed resource is a deployment of our expected SHA.
For a web resource, an excellent way to do this could be through a `/.well-known/deployment-meta.json`, while if you are running something like Terraform and AWS, you could tag your resources with this SHA. Try to have as few different methods of exposing this information as possible to keep monitoring simple.
With this piece of information, we are ready to create our monitor. Let's say we have a Git ref called `environments/production`, and its HEAD points to what we expect to be in production. Comparing is now simply a matter of getting the SHA of the HEAD commit of that ref and comparing it to our `/.well-known/deployment-meta.json`. If they match, the environment is in the expected state. If not, it is unhealthy.
Let's extend on this a bit; we can add a scheduled task that checks the monitor. If it's unhealthy, it retriggers a deployment and, if that fails, raises the alarm. So even if a deployment failed and no one noticed it yet, it will get auto-corrected the next time our simple reconciler runs. This can be done simply using something like a GitHub workflow.
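To make this concrete, here is a minimal TypeScript sketch of such a monitor. The endpoint shape (`{ "sha": "..." }`) and the `redeploy`/`alert` hooks are assumptions for the sake of illustration, not part of any existing tooling:

```typescript
type DeploymentMeta = { sha: string };

// Compare the expected SHA (the HEAD of e.g. environments/production)
// against what the running environment reports about itself.
const checkEnvironment = async (expectedSha: string, baseUrl: string) => {
  const response = await fetch(`${baseUrl}/.well-known/deployment-meta.json`);
  const meta = (await response.json()) as DeploymentMeta;
  return meta.sha === expectedSha ? 'healthy' : 'unhealthy';
};

// The simple reconciler: if the environment is unhealthy, retrigger a
// deployment; if that also fails, raise the alarm for manual intervention.
const reconcile = async (
  expectedSha: string,
  baseUrl: string,
  redeploy: (sha: string) => Promise<void>,
  alert: (message: string) => Promise<void>,
) => {
  if ((await checkEnvironment(expectedSha, baseUrl)) === 'healthy') return;
  try {
    await redeploy(expectedSha);
  } catch (error) {
    await alert(`Redeploy of ${expectedSha} failed: ${error}`);
  }
};
```

Run on a schedule, for example from a GitHub workflow, this gives you the simple reconciliation loop described above.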
You could also go all in and write a Crossplane controller and use the actual Kubernetes reconciler to ensure your environments are in a healthy state. Go as crazy as you like, just remember to make the tool work for you, not the other way around.
So, now we have a setup where Git tracks the expected state, and we can easily compare the expected state and the actual state. Lastly, we have a reconciliation loop that tries to rectify any discrepancy.
---
# Conclusion
So as a developer, the only thing I need to keep track of is that my Git refs are pointing to the right stuff. Everything else is reconciliation that I don't have to worry about—unless it's unreconcilable—in which case, I will get alerted.
As someone responsible for the infrastructure, the only thing I need to keep track of is that the expected state matches the actual state.
No more multi-tool lookup, complex log dives, or timeline reconstruction (until something fails, of course).
I believe that the switch from Git being just the code to being the code state makes a lot of daily tasks more straightforward and more transparent, builds a more resilient infrastructure, and is worth considering when deciding how you want to do Git.

Binary file not shown.

After

Width:  |  Height:  |  Size: 8.2 MiB

View File

@@ -0,0 +1,93 @@
---
title: 'A Small LLM Trick: Giving AI Assistants Long-Term Memory'
pubDate: 2026-01-10
color: '#4A90E2'
description: 'Experimenting with giving AI coding assistants a form of long-term memory.'
heroImage: ./assets/cover.png
slug: a-small-llm-trick
---
I have a confession to make: I often forget how my own projects work.
It usually happens like this: I spend a weekend building a Proof of Concept, life gets in the way for three weeks, and when I finally come back to it, I'm staring at a folder structure that makes sense to "Past Morten" but is a complete mystery to "Current Morten."
Nowadays, I usually have an AI assistant helping me out. But even they struggle with what I call the "session void." They see the code, but they don't see the *intent* behind changes made three weeks ago. They learn something during a long chat session, but as soon as you start a new one, that "aha!" moment is gone.
I've been experimenting with a new technique to solve this. It's early days, but the initial results have been promising enough that I wanted to share it. I call it the `AGENTS.md` trick.
## Beyond the System Prompt
Many developers using AI for coding are already familiar with the idea of a project-level prompt or an instructions file. But usually, these are static—you write them once, and they tell the agent how to format code or which library to prefer.
The experiment I'm running is to stop treating `AGENTS.md` as a static instruction manual and start treating it as **long-term memory**.
## The Strategy: Resemblance of Recall
The goal is to give the agent a set of instructions that forces it to continuously build and maintain an understanding of the project, across sessions and across different agents.
Ive started adding these core "memory" instructions to an `AGENTS.md` file in the root of my projects:
1. **Maintain the Source of Truth**: Every time the agent makes a significant architectural change or learns something new about the project's "hidden rules," it must update `AGENTS.md`.
2. **Externalize Discoveries**: Any time the agent spends time "exploring" a complex logic flow to understand it, it should write a short summary of that discovery into a new file in `./docs/`.
3. **Maintain the Map**: It must keep `./docs/index.md` updated with a list of these discoveries.
4. **Reference Personal Standards**: I point it to a global directory (like `~/prompts/`) where I keep my general preferences for things like JavaScript style or testing patterns. The power here is that these standards are **cross-project**. (Note: This works best if your AI tool—like Cursor or GitHub Copilot—allows you to reference or index files outside the current project root.) If you're in a team, you could host these in a shared git repository and instruct the agent to clone them if the folder is missing. This ensures the agent learns "how we build things" globally, not just "how this one project works."
## Why I'm Liking This (Especially for Existing Projects)
LLMs are great at reading what's right in front of them, but they have zero "recall" for things that happened in a different chat thread or a file they haven't opened yet.
By forcing the agent to document its own "aha!" moments, I'm essentially building a bridge between sessions.
- **Continuity**: When I start a new session, the agent reads `AGENTS.md`, sees the map in `index.md`, and suddenly has the context of someone who has been working on the project for weeks.
- **Organic Growth**: I don't have to sit down and write "The Big Manual." The documentation grows exactly where the complexity is, because that's where the agent had to spend effort understanding things.
- **Legacy Code**: This has been a lifesaver for older projects. I don't need to document the whole thing upfront. I just tell the agent: "As you figure things out, write it down."
## Evolutionary Patterns
One of the coolest side effects of this setup is how it handles evolving standards.
If I decide I want to switch from arrow functions back to standard function declarations, I don't just change my code; I tell the agent. Because the agent has instructions to maintain the memory, it can actually suggest updating my global standards.
I've instructed it that if it notices me consistently deviating from my `javascript-writing-style.md`, it should ask: *"Hey, it looks like you're moving away from arrow functions. Should I update your global pattern file to reflect this?"* This keeps my preferences as alive as the code itself.
## Early Results
Is it perfect? Not yet. Sometimes agents need a nudge to remember their documentation duties, and I'm still figuring out the best balance to keep the `./docs/` folder from getting cluttered.
But so far, it has drastically reduced the "What was I thinking?" factor for both me and the AI.
## The Hidden Benefits: Documentation Gardening
Beyond just "remembering" things for the next chat session, this pattern creates a virtuous cycle for the project's health.
- **Automated Gardening**: Periodically, you can ask an agent to go over the scattered notes in `./docs/` and reformat them into actual, structured project documentation. Since the agent has already captured the technical nuances it needed to work effectively, these docs are often more accurate and detailed than anything a human would write from scratch.
- **Context for Reviewers**: When you open a Pull Request, the documentation changes serve as excellent context for human reviewers. If you've introduced a change large enough to document, seeing the agent's "memory" update alongside the code makes the *why* behind your changes much more transparent.
- **Advanced Tip: The Auto-Documenting Hook**: For the truly lazy (like me), you can set up a git hook that runs an agent after a commit. It reviews your manual changes and ensures the `./docs/` folder is updated accordingly. This means that even if you bypass the AI for a quick fix, your project's "memory" stays in sync.
## The "Memory" Template
If you want to try this out, here is the behavioral template I've been using. **Please note: this is a work in progress.** I expect to refine these instructions significantly over the coming months.
```markdown
# Agent Guidelines for this Repository
> **Important**: You are responsible for maintaining the long-term memory of this project.
## 1. Your Behavioral Rules
- **Incremental Documentation**: When you figure out a complex part of the system or a non-obvious relationship between components, create a file in `./docs/` explaining it.
- **Self-Correction**: If you find that the existing documentation in `./docs/` or this file is out of date based on the current code, fix it immediately.
- **Index Maintenance**: Ensure `./docs/index.md` always points to all relevant documentation so it's easy to get an overview.
- **Personal Standards**: Before starting significant work, refer to `~/prompts/index.md` to see my preferences for `javascript-writing-style.md`, `good-testing-patterns.md`, etc.
- **Evolutionary Feedback**: If you notice me consistently requesting or writing code that contradicts these standards, ask if you should update the global files in `~/prompts/` to match the new pattern.
## 2. Project Context
[User or Agent: Briefly describe the current state and tech stack here to give the agent an immediate starting point]
```
## Making AI a Partner, Not a Guest
The difference between a tool that sees your code for the first time every morning and one that "remembers" your previous architectural decisions is massive.
It's a simple experiment, but it's one that anyone can try today. It turns the agent from a temporary guest in your codebase into a partner that helps you maintain a rolling understanding of what you are actually building.
Would this work for you? I don't know yet, but I'm excited to keep refining it.

Binary file not shown.

After

Width:  |  Height:  |  Size: 8.8 MiB

View File

@@ -0,0 +1,102 @@
---
title: I gave an AI Root access to my Kubernetes Cluster
pubDate: 2026-01-26
description: "I gave an AI root access to my Kubernetes cluster. It fixed a BIOS issue I didn't know I had. I am still terrified."
heroImage: ./assets/cover.png
slug: i-gave-an-ai-root-access-to-my-kubernetes-cluster
color: '#f97316'
---
I have a confession to make. I am a software engineer who insists on self-hosting everything, but I secretly hate being a sysadmin.
It's the classic paradox of the "Home Lab Enthusiast." I want the complex, enterprise-grade Kubernetes cluster. I want the GitOps workflows with ArgoCD. I want the Istio service mesh and the SSO integration with Authentik. It makes me feel powerful.
But you know what I don't want? I don't want to spend my Saturday afternoon grepping through `journalctl` logs to figure out why a random kernel update made my DNS resolver unhappy. I don't want my family watching the popcorn for movie night go stale while I debug file descriptor limits trying to bring the Jellyfin server back online.
So, I did the only logical thing a lazy engineer in 2026 would do: I built an AI agent, gave it root SSH access to my server, and went to sleep.
## The "Security Nightmare" Disclaimer
If you work in InfoSec, you might want to look away now. Or grab a stiff drink.
I have effectively built a system that gives a Large Language Model uncontrolled root access to my physical hardware. It is, generally speaking, a terrible idea. It's like handing a toddler a loaded gun, except the toddler has read the entire internet and knows exactly how to run `rm -rf --no-preserve-root`.
Do not do this on production systems. Do not do this if you value your data. I am doing it so you don't have to.
## The Lazy Architecture
I needed a digital employee. Someone (or something) that could act autonomously, investigate issues, and report back.
I cobbled the system together using **n8n**. Not because it's my absolute favorite tool in the world, but because it's the duct tape of the internet—it's the easiest way to glue a webhook to a dangerous amount of power.
The team consists of two agents:
1. **The Investigator (Claude Opus 4.5)**: The expensive consultant. It has a massive context window, excellent reasoning, and isn't afraid to dig deep. It handles the "why is everything on fire?" questions.
2. **The Monitor (Gemini Flash v3)**: The intern. It's fast, cheap, and runs daily to check if the fixes the consultant applied are actually working.
### How It Works (Ideally)
1. **The Panic**: I notice my cluster is acting up.
2. **The Delegation**: I open a chat window and type: *"The pods are crash-looping. Fix it."*
3. **The Black Box**: The Investigator spins up, SSHs into the node, runs commands, reads logs, and forms a hypothesis.
4. **The Fix**: It proposes a solution. I say "YOLO" (approve it).
5. **The Watch**: The Monitor keeps an eye on it for a week to make sure it wasn't a fluke.
## The Case of the Toasty CPU
Recently, my server started pulling a vanishing act. No graceful shutdowns, no warnings. Just a hard reset, as if someone had yanked the power cord.
I was stumped. It wasn't the PSU (I checked). It wasn't the RAM (mostly).
So I unleashed the agent. *"My K3s cluster keeps restarting. Figure it out."*
The AI logged in. It poked around the system logs. It checked the hardware sensors. And then, it found the smoking gun that I—the human with the "superior biological brain"—had completely missed.
Here is an excerpt from its report:
> **Root Causes Identified: CPU Thermal Shutdown**
>
> **Critical Finding**: CPU Power Limits were unlocked.
>
> ```
> PL1 (sustained power): 253W (Stock is 65W!)
> PL2 (burst power): 4095W (Unlimited)
> ```
>
> **Analysis**: The BIOS default settings are allowing the CPU to draw 4x its rated TDP. During the heavy Kubernetes boot sequence (starting 134 pods), the CPU hits thermal shutdown before the fans can even spin up.
I stared at the report. `PL2: 4095W`. My BIOS was effectively telling my CPU, "Consume the power of a small sun if you feel like it."
My poor Mini-ITX cooler never stood a chance. The "Performance" default in my motherboard BIOS was silently killing my server every time it tried to boot K3s.
### The Fix
The agent didn't just diagnose it; it offered a solution. It crafted a command to manually force the Intel RAPL (Running Average Power Limit) constraints back to sanity via the `/sys/class/powercap` interface.
```bash
echo 65000000 > /sys/class/powercap/intel-rapl/intel-rapl:0/constraint_0_power_limit_uw
```
I approved it. The agent applied the fix and even wrote a systemd service to make sure the limits persisted after a reboot.
### The Verification
This is where it gets cool. Fixing a bug is easy; knowing it's fixed is hard.
The agent created a **Verification Plan**. It refused to close the ticket until the server had survived **7 days** without an unplanned reboot.
For the next week, the Monitor agent checked in daily. It reported that the "CPU throttle count" dropped from **7,581 events/day** to almost zero. On day 7, I got a notification: *"Issue Resolved. System stable. Closing investigation."*
## The Future is Weird
This system works shockingly well. It feels like having a junior DevOps engineer who works 24/7, never sleeps, and doesn't complain when I ask it to read 5,000 lines of logs.
But it's also a glimpse into a weird future. We are moving from "Infrastructure as Code" to **"Infrastructure as Intent."**
I didn't write the YAML to fix the power limit. I didn't write the script to monitor the thermals. I just stated my intent: *"Stop the random reboots."* The AI figured out the implementation details.
Right now, the system is reactive. I have to tell it something is wrong. The next step? **Active Monitoring**. I want the agent to wake up, "feel" the server is running a bit sluggish, and start an investigation before I even wake up.
I might eventually automate myself out of a hobby. But until then, I'm enjoying the extra sleep.
(Just maybe don't give it root access to your production database. Seriously.)

View File

Before

Width:  |  Height:  |  Size: 39 KiB

After

Width:  |  Height:  |  Size: 39 KiB

View File

Before

Width:  |  Height:  |  Size: 17 KiB

After

Width:  |  Height:  |  Size: 17 KiB

View File

Before

Width:  |  Height:  |  Size: 31 KiB

After

Width:  |  Height:  |  Size: 31 KiB

View File

Before

Width:  |  Height:  |  Size: 3.1 KiB

After

Width:  |  Height:  |  Size: 3.1 KiB

View File

Before

Width:  |  Height:  |  Size: 3.9 KiB

After

Width:  |  Height:  |  Size: 3.9 KiB

View File

Before

Width:  |  Height:  |  Size: 165 KiB

After

Width:  |  Height:  |  Size: 165 KiB

View File

Before

Width:  |  Height:  |  Size: 4.3 KiB

After

Width:  |  Height:  |  Size: 4.3 KiB

View File

Before

Width:  |  Height:  |  Size: 1.6 MiB

After

Width:  |  Height:  |  Size: 1.6 MiB

View File

Before

Width:  |  Height:  |  Size: 29 KiB

After

Width:  |  Height:  |  Size: 29 KiB

View File

@@ -0,0 +1,89 @@
---
title: My day is being planned by an algorithm
pubDate: 2022-05-06
description: ''
color: '#e7d9ac'
heroImage: ./assets/cover.png
slug: bob-the-algorithm
---
import { Image } from 'astro:assets'
import TaskBounds from './assets/TaskBounds.png'
import Frame1 from './assets/Frame1.png'
import Graph1 from './assets/Graph1.png'
import Graph2 from './assets/Graph2.png'
Allow me to introduce Bob, an algorithm who has just accepted a role as my assistant.
I'm not very good at planning my day, and the many apps out there that promise to help haven't solved the problem for me. This is usually due to three significant shortcomings:
1. Most day planner apps do what their paper counterparts would do: they record the plan you create. I don't want to make the plan; someone should do that for me.
2. They help you create a plan at the start of the day that you have to follow all day long. My days aren't that static, so my schedule needs to be able to change throughout the day.
3. They can't handle travel time between locations very well.
To solve those issues, I decided that the piece of silicon in my pocket, capable of doing a million calculations a second, should be able to help me do something other than waste time doomscrolling. It should help me get more done throughout the day and free up more time for the things I want to do. That's why I created Bob.
I also wanted a planning algorithm that wasn't solely for productivity. I didn't want to end up in the same situation as poor Kiki in the book *The Circle*, who is driven insane by a planning algorithm that tries to hyper-optimize her day. Bob also needs to plan for downtime.
Bob is still pretty young and learning new things, but he has gotten to the point where I believe he is good enough to use on a day-to-day basis.
<Image src={Frame1} alt='Frame1' />
## How Bob Works
Bob receives a list of tasks. Some are from my calendar (both my work and my personal one), some are from "routines" (which are daily tasks that I want to do most days, such as eating breakfast or picking up the kid), and some tasks come from "goals," which are a list of completable items. These tasks go into Bob, and he tries to create a plan for the next couple of days where I get everything done that I set out to do.
Tasks have a bit more data than your standard calendar events to allow for good scheduling:
* An **earliest start time** and a **latest start time**. These define when the task can be added to the schedule.
* A list of locations where the task can be completed.
* A duration.
* If the task is required.
* A priority.
<Image src={TaskBounds} alt='Task bounds' />
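As an illustration, the task shape could be modeled roughly like this in TypeScript; the field names are my guesses based on the list above, not Bob's actual data model:

```typescript
// Rough model of a task; names are illustrative, not Bob's real schema.
type Task = {
  id: string;
  earliestStart: Date;      // earliest time the task may start
  latestStart: Date;        // latest time the task may still start
  locations: string[];      // locations where the task can be completed
  durationMinutes: number;
  required: boolean;
  priority: number;
};
```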
Bob uses a graph walk to create the optimal plan, where each node contains a few different things:
* A list of remaining tasks.
* A list of tasks that are impossible to complete in the current plan.
* A score.
* The current location.
* The present time.
Bob starts by figuring out which locations I can travel to in order to complete the remaining tasks and then creates new leaf nodes for all of those transitions. Next, he figures out whether some of the remaining tasks become impossible to complete, determines when I will arrive at the location, and calculates a score for that node.
He then gets a list of all the remaining tasks for the current node that can be completed at the current location, again figuring out when I would be done with the task, updating the list of impossible tasks, and scoring the node. If any node adds a required task to the impossible list, that node is considered dead, and Bob will not analyze it further.
<Image src={Graph1} alt='Graph1' />
Now we have a list of active leaves, and from that list, we find the node with the highest score and redo the process from above.
<Image src={Graph2} alt='Graph2' />
Bob has four different strategies for finding a plan:
* **First valid:** This finds the first plan that satisfies all constraints but may lead to non-required tasks getting removed, even though it would be possible to find a plan that included all tasks. This strategy is the fastest and least precise.
* **First complete:** This does the same as "First valid" but only exits early if it finds a plan that includes all tasks. This strategy will generally create pretty good plans but can contain excess travel. If it does not find any plans that contain all tasks, it will switch to the "All valid" strategy.
* **All valid:** This explores all paths until the path is either dead or completed. Then it finds the plan with the highest score. If there are no valid plans, it will switch to the "All" strategy.
* **All:** This explores all paths, even dead ones, and at the end returns the one with the highest score. This strategy allows a plan to be created even if it needs to remove some required tasks.
Scoring is quite simple at the moment, but something I plan to expand on a lot. Currently, the score increases when a task is completed, and it decreases when a task becomes impossible. How much it increases or decreases is influenced by the task's priority and if the task is required. It also decreases based on minutes spent traveling.
The leaf picked for analysis is the one with the highest score. This approach allows the two first strategies to create decent results, though they aren't guaranteed to be the best. It all comes down to how well the scoring variables are tweaked. Currently, they aren't, but at some point, I plan to create a training algorithm for Bob, which will create plans, score them through "All," and then try to tweak the variables to arrive at the correct one with as few nodes analyzed as possible when running the same plan through "First valid"/"First complete."
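To make the graph walk concrete, here is a heavily simplified best-first loop in TypeScript, reusing the `Task` shape sketched above. The expansion and scoring logic are stubbed out, so this only illustrates the "always expand the highest-scoring leaf" idea behind the "First valid"/"First complete" strategies:

```typescript
type PlanNode = {
  remaining: Task[];     // tasks not yet placed in the plan
  impossible: Task[];    // tasks that can no longer fit in this plan
  score: number;
  location: string;
  time: Date;
};

// Stand-ins for Bob's real logic: generating travel/task child nodes
// and scoring them is where the actual scheduling work happens.
const expand = (_node: PlanNode): PlanNode[] => [];
const isComplete = (node: PlanNode) => node.remaining.length === 0;
const isDead = (node: PlanNode) => node.impossible.some((task) => task.required);

// "First complete"-style search: always expand the highest-scoring leaf
// and stop as soon as a plan contains every task.
const plan = (start: PlanNode): PlanNode | undefined => {
  const leaves = [start];
  while (leaves.length > 0) {
    leaves.sort((a, b) => b.score - a.score);
    const best = leaves.shift()!;
    if (isComplete(best)) return best;
    for (const child of expand(best)) {
      if (!isDead(child)) leaves.push(child);
    }
  }
  return undefined; // no complete plan found; Bob would fall back to "All valid"
};
```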
This approach also allows me to calculate a plan with any start time, so I can re-plan later in the day if I can't follow the original plan or if things get added or removed. So this becomes a tool that helps me get the most out of my day without dictating it.
Bob can also do multi-day planning. Here, he gets a list of tasks for the different days as he usually would and a "shared" list of goals. So he runs the same calculation, adding in the tasks for that day, along with the shared goal list, and everything remaining from the shared list then gets carried over to the next day. This process repeats for all the remaining days.
I have created a proof-of-concept app that houses Bob. I can manage tasks, generate plans, and update my calendar with those plans in this app.
There are also a few features that I want to add later. The most important one is an "asset" system. For instance, when calculating travel, it needs to know if I have brought the bike along because if I took public transit to work, it doesn't make sense to calculate a bike transition later in the day. This system would work by "assets" being tied to a task and location, and then when Bob creates plans, he knows to consider if the asset is there or not. Assets could also be tied to tasks, so one task may be to pick up something, and another is to drop it off. In those cases, assets would act as dependencies, so I have to have picked up the asset before being able to drop it off. The system is pretty simple to implement but causes the graph to grow a lot, so I need to do some optimizations before it makes sense to put it in.
## Conclusion
I have only been using Bob for a few days, but so far, he seems to create good plans and has helped me achieve more productive tasks. He has also scheduled downtime, such as reading, meditation, and playing on the console, ensuring I had time for that in the plan.
There is still a lot of stuff that needs to be done, and I will slowly add features and fix the codebase over time.
You can find the source for this algorithm and the app it lives in at [Github](https://github.com/morten-olsen/bob-the-algorithm), but beware, it is a proof of concept, so readability or maintainability hasn't been a goal.

Binary file not shown.

After

Width:  |  Height:  |  Size: 9.0 MiB

View File

@@ -0,0 +1,151 @@
---
title: 'The Clubhouse Protocol: A Thought Experiment in Distributed Governance'
pubDate: 2026-01-12
color: '#10b981'
description: 'A napkin sketch for a decentralized messaging protocol where community rules are enforced by cryptography, not moderators.'
heroImage: ./assets/cover.png
slug: clubhouse-protocol
---
I am a huge admirer of the open-source ethos. There is something magical about how thousands of strangers can self-organize to build world-changing software like Linux or Kubernetes. These communities thrive on rough consensus, shared goals, and the freedom to fork if visions diverge.
But there is a disconnect. While we have mastered distributed collaboration for our *code* (Git), the tools we use to *talk* to each other are still stuck in a rigid, hierarchical past.
Even in the healthiest, most democratic Discord server or Slack workspace, the software forces a power imbalance. Technically, one person owns the database, and one person holds the keys. The community remains together because of trust, yes—but the *architecture* treats it like a dictatorship.
## The Problem: Benevolent Dictatorships
Most online communities I am part of are benevolent. The admins are friends, the rules are fair, and everyone gets along. But this peace exists *despite* the software, not because of it.
Under the hood, our current platforms rely on a "superuser" model. One account has the `DELETE` privilege. One account pays the bill. One account owns the data.
This works fine until it doesn't. We have seen it happen with Reddit API changes, Discord server deletions, or just a simple falling out between founders. When the social contract breaks, the one with the technical keys wins. Always.
I call this experiment **The Clubhouse Protocol**. It is an attempt to fix this alignment—to create a "Constitution-as-Code" where the social rules are enforced by cryptography, making the community itself the true owner of the platform.
This post is part of a series of ideas from my backlog—projects I have wanted to build but simply haven't found the time for. I am sharing them now in the hope that someone else becomes inspired, or at the very least, as a mental note to myself if I ever find the time (and skills) to pursue them.
*Disclaimer: I am not a cryptographer. The architecture below is a napkin sketch designed to explore the social dynamics of such a system. The security mechanisms described (especially the encryption ratcheting) are illustrative and would need a serious audit by someone who actually knows what they are doing before writing a single line of production code.*
## The Core Concept
In the Clubhouse Protocol, a "Channel" isn't a row in a database table. It is a shared state defined by a JSON document containing the **Rules**.
These rules define everything:
* Who is allowed to post?
* Who is allowed to invite others?
* What is the voting threshold to change the rules?
Because there is no central server validating your actions, the enforcement happens at the **client level**. Every participant's client maintains a copy of the rules. If someone tries to post a message that violates the rules (e.g., posting without permission), the other clients simply reject the message as invalid. It effectively doesn't exist.
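Purely as an illustration (none of these field names come from a real spec), such a rule document could look something like this:

```typescript
// Illustrative rule document; the fields are made up for this sketch.
type Rules = {
  version: number;
  post: 'owner-only' | 'members';        // who may post data messages
  invite: 'owner-only' | 'members';      // who may propose adding new members
  ruleChangeThreshold: number;           // fraction of votes needed to change the rules
  votingPower: Record<string, number>;   // member public key -> number of votes
};

const genesisRules: Rules = {
  version: 1,
  post: 'owner-only',
  invite: 'owner-only',
  ruleChangeThreshold: 1,                // 100% of votes required to change the rules
  votingPower: { 'founder-public-key': 100 },
};
```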
## The Evolution of a Community
To understand why this is powerful, let's look at the lifecycle of a theoretical community.
### Stage 1: The Benevolent Dictator
I start a new channel. In the initial rule set, I assign myself as the "Supreme Owner." I am the only one allowed to post, and I am the only one allowed to change the rules.
I invite a few friends. They can read my posts (because they have the keys), but if they try to post, their clients know it's against the rules, so they don't even try.
### Stage 2: The Republic
I decide I want a conversation, not a blog. So, I construct a `start-vote` message.
* **Proposal:** Allow all members to post.
* **Voting Power:** I have 100% of the votes.
I vote "Yes." The motion passes. The rules update. Now, everyone's client accepts messages from any member.
### Stage 3: The Peaceful Coup
As the community grows, I want to step back. I propose a new rule change:
* **Proposal:** New rule changes require a 51% majority vote from the community.
* **Proposal:** Reduce my personal voting power from 100% to 1 (one person, one vote).
The community votes. It passes.
Suddenly, I am no longer the owner. I am just a member. If I try to ban someone or revert the rules, the community's clients will reject my command because I no longer have the cryptographic authority to do so. The community has effectively seized the means of production (of rules).
## The Architecture
How do we build this without a central server?
### 1. The Message Chain
We need a way to ensure order and prevent tampering.
* A channel starts with three random strings: an `ID_SEED`, a `SECRET_SEED`, and a "Genesis ID" (a fictional previous message ID).
* Each message ID is generated by HMAC'ing the *previous* message ID with the `ID_SEED`. This creates a predictable, verifiable chain of IDs.
* The encryption key for the message **envelope** (metadata) is derived by HMAC'ing the specific Message ID with the `SECRET_SEED`.
This means if you know the seeds, you can calculate the ID of the next message that *should* appear. You can essentially "subscribe" to the future.
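A tiny sketch of that derivation, using Node's built-in crypto module, with the caveat from the disclaimer above that the real scheme would need proper cryptographic review:

```typescript
import { createHmac } from 'node:crypto';

const hmac = (key: string, value: string) =>
  createHmac('sha256', key).update(value).digest('hex');

// The next message ID is derived from the previous ID and the ID_SEED,
// so anyone holding the seeds can compute where to look for future messages.
const nextMessageId = (previousId: string, idSeed: string) => hmac(idSeed, previousId);

// The envelope key for a message is derived from its ID and the SECRET_SEED.
const envelopeKey = (messageId: string, secretSeed: string) => hmac(secretSeed, messageId);
```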
### 2. The Envelope & Message Types
The protocol uses two layers of encryption to separate *governance* from *content*.
**The Outer Layer (Channel State):**
This layer is encrypted with the key derived from the `SECRET_SEED`. It contains the message metadata, but crucially, it also contains checksums of the current "political reality":
* Hash of the current Rules
* Hash of the Member List
* Hash of active Votes
This forces consensus. If my client thinks "Alice" is banned, but your client thinks she is a member, our hashes won't match, and the chain will reject the message.
**The Inner Layer (The Payload):**
Inside the envelope, the message has a specific `type`:
* `start-vote` / `cast-vote`: These are visible to everyone in the channel. Governance must be transparent.
* `mutiny`: A public declaration of a fork (more on this later).
* `data`: This is the actual chat content. To be efficient, the message payload is encrypted once with a random symmetric key. That key is then encrypted individually for each recipient's public key and attached to the header. This allows the group to remove a member simply by stopping encryption for their key in future messages.
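Sketched as TypeScript types (the field names are mine, purely for illustration):

```typescript
// Outer layer: metadata encrypted with the key derived from SECRET_SEED.
type Envelope = {
  id: string;            // HMAC(previous message ID, ID_SEED)
  rulesHash: string;     // checksum of the current rule set
  membersHash: string;   // checksum of the member list
  votesHash: string;     // checksum of active votes
  payload: string;       // encrypted inner message (see below)
};

// Inner layer: the actual message, with governance messages kept transparent.
type InnerMessage =
  | { type: 'start-vote'; proposal: unknown }
  | { type: 'cast-vote'; voteId: string; choice: 'yes' | 'no' }
  | { type: 'mutiny'; newRules: unknown; newMembers: string[] }
  | {
      type: 'data';
      ciphertext: string;                     // content encrypted with a random symmetric key
      recipientKeys: Record<string, string>;  // that key, encrypted per recipient public key
    };
```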
### 3. Storage Agnosticism
Because the security and ordering are baked into the message chain itself, the **transport layer** becomes irrelevant.
You could post these encrypted blobs to a dumb PHP forum, an S3 bucket, IPFS, or even a blockchain. The server doesn't need to know *what* the message is or *who* sent it; it just needs to store a blob of text at a specific ID.
## The Killer Feature: The Mutiny
The most radical idea in this protocol is the **Mutiny**.
In a standard centralized platform, if 45% of the community disagrees with the direction the mods are taking, they have to leave and start a new empty server.
In the Clubhouse Protocol, they can **Fork**.
A `mutiny` message is a special transaction that proposes a new set of rules or a new member list. It cannot be blocked by existing rules.
When a mutiny is declared, it splits the reality of the channel.
* **Group A (The Loyalists)** ignores the mutiny message and continues on the original chain.
* **Group B (The Mutineers)** accepts the mutiny message. Their clients apply the new rules (e.g., removing the tyrannical admin) and continue on a new fork of the chain.
Crucially, **history is preserved**. Both groups share the entire history of the community up until the fork point. It's like `git branch` for social groups. You don't lose your culture; you just take it in a different direction.
## Implementation Challenges
As much as I love this concept, there are significant reasons why it doesn't exist yet.
**The Sybil Problem:** In a system where "one person = one vote," what stops me from generating 1,000 key pairs and voting for myself? The solution lies in the protocol's membership rules. You cannot simply "sign up." An existing member must propose a vote to add your public key to the authorized member list. Until the community votes to accept you, no one will encrypt messages for you, and your votes will be rejected as invalid.
**Scalability & The "Header Explosion":** The encryption method described above (encrypting the content key for every single recipient) hits a wall fast. If you have 1,000 members and use standard RSA encryption, the header alone would be around 250KB *per message*. This protocol is designed for "Dunbar Number" sized groups (under 150 people). To support massive communities, you would need to implement something like **Sender Keys** (used by Signal), where participants share rotating group keys to avoid listing every recipient in every message.
**The "Right to be Forgotten":** In an immutable, crypto-signed message chain, how do you delete a message? You can't. You can only post a new message saying "Please ignore message #123," but the data remains. This is a privacy nightmare and potentially illegal under GDPR.
**Key Management is Hard:** If a user loses their private key, they lose their identity and reputation forever. If they get hacked, there is no "Forgot Password" link to reset it.
**The Crypto Implementation:** As noted in the disclaimer, rolling your own crypto protocol is dangerous. A production version would need to implement proper forward secrecy (like the Signal Protocol) so that if a key is compromised later, all past messages aren't retroactively readable. My simple HMAC chain doesn't provide that.
## Why it matters
Even if the **Clubhouse Protocol** remains a napkin sketch, I think the question it poses is vital: **Who owns the rules of our digital spaces?**
Right now, the answer is "corporations." But as we move toward more local-first and peer-to-peer software, we have a chance to change that answer to "communities."
We need more experiments in **distributed social trust**. We need tools that allow groups to govern themselves, to fork when they disagree, and to evolve their rules as they grow.
If you are a cryptographer looking for a side project, feel free to steal this idea. I just want an invite when it launches.

View File

Before

Width:  |  Height:  |  Size: 1.6 MiB

After

Width:  |  Height:  |  Size: 1.6 MiB

View File

@@ -0,0 +1,36 @@
---
title: How to hire engineers, by an engineer
description: ''
pubDate: 2022-03-16
color: '#8bae8c'
heroImage: ./assets/cover.png
slug: how-to-hire-engineers-by-an-engineer
---
It has been a few years since I have been part of the recruitment process. Still, I recently went through the hiring process myself, so I will mix a bit from both sides for this article to give you some insight from the perspective of a job seeker and from the other side of the table. I'll share what worked, what didn't, and what caused me not to consider a company—because, spoiler alert, engineers are contacted a lot.
First, I need to introduce a hard truth, as it will be the foundation of many of my points and is likely the most important takeaway: **Your company is not unique.**
Unless your tech brand is among the top X in the world, your company alone isn't a selling point. I have been contacted by so many companies that thought because they were leaders in their field or had a "great product," candidates would come banging on their door. If I could disclose all those messages, it would be easy to see that they all say almost the same thing, and chances are your job listing is the same. Sorry. The takeaway from this is that if everything is equal, any misstep in your hiring process can cost you that candidate. So if you are not among the strongest of tech brands, you need to be extremely aware, or you will not fill the position.
Okay, after that slap in the face, we can take a second to look at something...
A lot of people focus on skills when hiring. Of course, the candidate should have the skills for the position, but I will make a case to put less focus on the hard skills and more focus on passion.
Typically, screening for skills through an interview is hard, and techniques like code challenges have their own issues, but more on that later. Screening for passion is easier; you can usually get a good feeling if a candidate is passionate about a specific topic. And passionate people want to learn! So even if the candidate has limited skills, if they have passion, they will learn and outgrow a candidate with experience but no passion.
Filling a team with technical skills can solve an immediate requirement, but companies, teams, and products change. Your requirements will change along with them. A passionate team will adjust and evolve, whereas a team consisting of skilled people but without passion will stay where they were when you hired them.
Another issue I see in many job postings is requiring a long list of skills. It would be awesome to find someone skilled in everything who could solve all tasks. In the real world, whenever you add another skill to that list, you are limiting the list of candidates that would fit. So chances are you are not going to find anyone, or the actual skills of any candidate in that very narrow list will be much lower than in a wider pool. A better way is to just add the most important skills and teach the candidate any less important skills on the job. If you hired passionate people, this should be possible. (Remember to screen for passion for learning new things.)
While we're on the expected skill list, a lot of companies have this list of "it would be really nice if you had these skills." Well, those could definitely be framed as learning experiences instead. If you have recruited passionate people, seeing that they will learn new cool skills counts as a plus, and any candidate who already has the skill will see it and think, "Awesome, I am already uniquely suited for the job!"
I promised to talk a bit about code challenges. They can be useful to screen a candidate's ability to just go in and start to work from day one, and if done correctly, they can help a manager organize their process to best suit the team's unique skills, but...
Hiring at the moment is hard! As stated, pretty much any job listing I have seen is identical. Just as a small outlier on your resume can land it in the never-read pile in a competitive job market, in a competitive hiring market your listing can just as easily never get acted upon.
Engineers are contacted a lot by recruiters, and speaking to all of them would require a lot of work. So a company with a prolonged process quickly gets filtered out, especially by the best candidates, who are contacted the most and most likely already hold a full-time job, making their time a scarce resource. Be aware that if you use time-consuming processes such as the code challenge, you might miss out on the best candidates.
Please just disclose the salary range. From being connected to a few hundred recruiters here on LinkedIn, I can see that this isn't just me but a general issue. As mentioned before, it takes very little to have your listings ignored, and most of your strongest potential candidates already have full-time jobs and would not want to move to a position paying less unless the position were absolutely unique (which, again, yours most likely isn't). Therefore, if you choose not to disclose the salary range, be aware that you will miss out on most of the best candidates. A company will get an immediate "no" from me if they don't disclose the salary range.
Lastly, I have spent a lot of words telling you that your company or position isn't unique, and well, we both know that isn't accurate; your company most likely has something unique to offer, be that soft values or hard benefits. Be sure to put them in your job listing to bring out this uniqueness; it is what is going to set you apart from the other listings. There are a lot of other companies with the same tech stack, using an agile approach, with a high degree of autonomy, with a great team... But what can you offer that no one else can? Get it front and center. Recruiting is marketing, and good copywriting is key.

Binary file not shown.

After

Width:  |  Height:  |  Size: 8.6 MiB

View File

@@ -0,0 +1,177 @@
---
title: 'Hyperconnect: A Theory of Seamless Device Mesh'
pubDate: 2026-01-12
color: '#3b82f6'
description: 'A theoretical framework for building an Apple-like service mesh that spans WiFi, Bluetooth, and LTE seamlessly.'
heroImage: ./assets/cover.png
slug: hyperconnect
---
Apple's "Continuity" features feel like magic. You copy text on your phone and paste it on your Mac. Your watch unlocks your laptop. It just works. But it only works because Apple owns the entire vertical stack.
For the rest of us living outside the walled garden, device communication is stuck in the 90s. We are still manually pairing Bluetooth or debugging local IP addresses. Why is it harder to send 10 bytes of data to a device three feet away than it is to stream 4K video from a server on the other side of the planet?
I have "ecosystem envy," and I think it's time we fixed it. I want to build a service mesh that treats Bluetooth, WiFi, and LTE as mere implementation details, not hard constraints.
"But doesn't Tailscale solve this?" you might ask. Tailscale (and WireGuard) are brilliant technologies that solve the *connectivity* problem by creating a secure overlay network at Layer 3 (IP). However, they don't solve the *continuity* problem. They assume the physical link exists. They can't ask the radio firmware to scan for BLE beacons because the WiFi signal is getting weak.
Similarly, projects like **libp2p** (used by IPFS) do an excellent job of abstracting transport layers for developers, but they function more as a library for building P2P apps than as a system-wide mesh that handles your text messages and file transfers transparently. I want something that sits deeper—between the OS and the network.
Furthermore, I have a strong distaste for the "walled garden" approach. I don't believe you should have to buy every device from a single manufacturer just to get them to talk to each other reliably. An open-source, vendor-neutral framework would unlock this kind of "hyperconnectivity" for the maker community, allowing us to mix and match hardware without sacrificing that magical user experience.
So, I've been toying with a concept I call **Hyperconnect**.
If a person is hyperconnected, it generally means they are available on multiple different channels simultaneously. I want to build a framework that allows my devices to do the same.
## The Big Idea
The core idea is to build a framework where all your personal devices create a **device mesh** (distinct from the backend "service mesh" concept often associated with Kubernetes) that can span different protocols. This mesh maintains a live service graph and figures out how to relay messages from one device to another, using different strategies to do so effectively.
This isn't just about failover; it's about context-aware routing.
### The Architecture
To make this work without turning into a security nightmare, we need a few foundational blocks:
#### 1. Passports (Identity)
We can't just let any device talk to the mesh. The user starts by creating an authority private key. This key is used to sign "Passports" for devices. A passport is a cryptographic way for a device to prove, "I belong to Morten, and I am allowed in the mesh."
Crucially, this passport also includes a signed public key for the device. This allows for end-to-end encryption between any two nodes. Even if traffic is relayed through a third device (like the phone), the intermediary cannot read the payload.
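A passport could be as small as the authority's signature over the device's public key and some metadata. Here is a minimal sketch using Node's built-in Ed25519 support; the passport shape is my own invention, not a finished format:
```typescript
import { generateKeyPairSync, sign, verify } from 'node:crypto';

// The user's authority key pair, created once during setup
const authority = generateKeyPairSync('ed25519');

// Each device gets its own key pair; the passport binds it to the authority
const device = generateKeyPairSync('ed25519');

const passportBody = JSON.stringify({
  devicePublicKey: device.publicKey.export({ type: 'spki', format: 'pem' }),
  name: 'watch',
  issuedAt: Date.now(),
});

// The authority signs the passport once
const signature = sign(null, Buffer.from(passportBody), authority.privateKey);
const passport = { body: passportBody, signature: signature.toString('base64') };

// Any node can verify it offline with just the authority's *public* key
const isValid = verify(
  null,
  Buffer.from(passport.body),
  authority.publicKey,
  Buffer.from(passport.signature, 'base64'),
);
console.log('passport valid:', isValid);
```
Because the passport carries the device's signed public key, any two passport holders can encrypt end to end without a relay in the middle ever seeing plaintext.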
#### 2. Lighthouses (Discovery)
How do isolated devices find each other? We need a **Lighthouse**. This is likely a cloud server or a stable home server with a public IP. When a device connects for the first time, it gets introduced through the Lighthouse to find other nodes and build up its local service graph. While an always-available service helps established devices reconnect, the goal is to be as peer-to-peer (P2P) as possible.
#### 3. The Service Graph
Every device advertises the different ways to communicate with it. It might say: "I am available via mDNS on the local LAN, I have an LTE modem accessible via this IP, and I accept Bluetooth LE connections."
#### 4. Topology & Gossip
Once introduced, the Lighthouse steps back. The goal is a resilient peer-to-peer network. However, a naive "spaghetti mesh" where everyone gossips with everyone is a battery killer.
Instead, the network forms a **tiered topology**:
* **Anchor Nodes:** Mains-powered devices (NAS, Desktop) maintain the full Service Graph and gossip updates frequently. They act as the stable backbone.
* **Leaf Nodes:** Battery-constrained devices (Watch, Sensor) connect primarily to Anchor Nodes. They typically do not route traffic for others unless acting as a specific bridge (like a Phone acting as an LTE relay).
When a device rejoins the network (e.g., coming home), it doesn't need to check in with the Lighthouse. It simply pings the first known peer it sees (e.g., the Watch sees the Phone). If that peer is authorized, they sync the graph directly. The Lighthouse is merely a fallback for "cold" starts or when no known local peers are visible.
## The Scenario: A Smartwatch in the Wild
To explain how this works in practice, let's look at a specific scenario. Imagine I have a custom smartwatch that connects to a service on my **desktop computer at home** to track my steps.
### Stage 1: At Home
Initially, the watch is connected at home. It publishes its network IP using mDNS. My desktop sees it on the local network. Since the framework prioritizes bandwidth and low latency, the two devices communicate directly over IP.
The watch also knows it has an LTE modem, and it advertises to the Lighthouse that it is reachable there. It also advertises to my Phone that it's available via Bluetooth. The Service Graph is fully populated.
### Stage 2: Leaving the House
Now, it's time to head out. I leave the house, and the local WiFi connection drops.
This is where the framework needs to be smart. It must have a built-in mechanism to handle **backpressure**. For the few seconds I am in the driveway between networks, packets aren't lost; they are captured in a ring buffer (up to a safe memory limit), waiting for the mesh to heal.
The **Connection Owner** (in this case, my desktop, chosen because it has the most compute power and no battery constraints) looks at the graph. It sees the WiFi path is dead. It checks for alternatives. It sees the Watch advertised P2P capabilities over LTE.
The desktop re-establishes the connection over LTE. The buffer flushes. No packets dropped, just slightly delayed.
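The buffering itself does not need to be fancy. A bounded buffer that drops the oldest packets when it hits its memory limit and flushes once a new path exists might look roughly like this (a sketch, not the real framework):
```typescript
// A rough sketch of the per-connection buffer described above.
class PacketBuffer {
  #packets: Uint8Array[] = [];
  #bytes = 0;

  constructor(private readonly maxBytes = 1024 * 1024) {}

  push(packet: Uint8Array) {
    this.#packets.push(packet);
    this.#bytes += packet.byteLength;
    // Ring-buffer behaviour: drop the oldest packets once the memory limit is hit
    while (this.#bytes > this.maxBytes && this.#packets.length > 1) {
      this.#bytes -= this.#packets.shift()!.byteLength;
    }
  }

  // Called by the Connection Owner once a new path (LTE, relay, ...) is established
  async flush(send: (packet: Uint8Array) => Promise<void>) {
    while (this.#packets.length > 0) {
      await send(this.#packets[0]); // only dequeue after a successful send
      this.#bytes -= this.#packets.shift()!.byteLength;
    }
  }
}
```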
### Stage 3: The Metro (The Relay)
I head down into the metro. The LTE coverage is spotty, and the smartwatch's tiny antenna can't hold a stable connection to the cell tower. The connection drops again. The buffer starts to fill.
The desktop looks at the Service Graph. Direct IP is gone. LTE is gone. But, it sees that the **Phone** is currently online via 5G (better antenna) and that the Phone has previously reported a Bluetooth relationship with the Watch.
The desktop contacts the Phone: *"Hey, I need a tunnel to the Watch."*
The Phone acts as a relay. It establishes a Bluetooth Low Energy link to the Watch. The data path is now **Desktop ↔ Internet ↔ Phone ↔ Bluetooth ↔ Watch**.
The step counter updates. The mesh survives.
## Beyond the Basics: Strategy and Characteristics
So far, I've mostly talked about the "big three": WiFi, Bluetooth, and LTE. But the real power of a personal mesh comes when we start integrating niche protocols that are usually siloed.
### Expanding the Protocol Stack
Imagine adding **Zigbee** or **Thread** (via Matter) to the mix. These low-power mesh protocols are perfect for stationary home devices. Suddenly, your lightbulbs could act as relay nodes for your smartwatch when you are in the garden, extending the mesh's reach without needing a full WiFi signal.
Or consider **LoRa** (Long Range). I could have a LoRa node on my roof and one in my car. Even if I park three blocks away and the car has no LTE signal, it could potentially ping my home node to report its battery status or location. The bandwidth is tiny, but the range is incredible.
### Connection Characteristics
However, just knowing that a link *exists* isn't enough. The mesh needs to know the *quality* and *cost* of that link. We need to attach metadata to every edge in our service graph.
I believe we need to track at least four dimensions:
1. **Bandwidth:** Can this pipe handle a 1080p stream, or will it choke on a JSON payload?
2. **Latency:** Is this a snappy local WiFi hop (5ms), or a satellite uplink (600ms)?
3. **Energy Cost:** This is critical for battery-powered devices. Waking up the WiFi radio on an ESP32 is expensive. Sending a packet via BLE or Zigbee is much cheaper.
4. **Monetary Cost:** Am I on unlimited home fiber, or am I roaming on a metered LTE connection in Switzerland?
### Smart Routing Strategies
Once the mesh understands these characteristics, the routing logic becomes fascinating. It stops being about "shortest path" and starts being about "optimal strategy." A rough sketch of how this weighting could work follows the list below.
* **The "Netflix" Strategy:** If I am trying to stream a video file from my NAS to my tablet, the mesh should optimize for **Bandwidth**. It should aggressively prefer WiFi Direct or wired Ethernet, even if it takes a few seconds to negotiate the handshake.
* **The "Whisper" Strategy:** If a temperature sensor needs to report a reading every minute, the mesh should optimize for **Energy**. It should route through the nearest Zigbee node, avoiding the power-hungry WiFi radio entirely.
* **The "Emergency" Strategy:** If a smoke detector goes off, we don't care about energy or money. The mesh should blast the alert out over every available channel—WiFi, LTE, LoRa, Bluetooth—to ensure the message gets through to me.
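To sketch how this could be expressed in code, here is one entirely illustrative way to annotate edges and turn a strategy into a single cost; the transports, numbers, and weights are all made up:
```typescript
// Entirely illustrative: edge annotations and a weighted cost function.
type Edge = {
  transport: 'wifi' | 'ble' | 'lte' | 'zigbee' | 'lora';
  bandwidthKbps: number;
  latencyMs: number;
  energyCost: number;   // relative radio cost, e.g. BLE low, WiFi high
  monetaryCost: number; // 0 for home fiber, high for roaming LTE
};

type Strategy = { bandwidth: number; latency: number; energy: number; money: number };

const strategies: Record<string, Strategy> = {
  netflix: { bandwidth: 1.0, latency: 0.2, energy: 0.0, money: 0.5 },
  whisper: { bandwidth: 0.0, latency: 0.1, energy: 1.0, money: 0.2 },
};

// Lower is better; the mesh would score each candidate path and pick the cheapest
const cost = (edge: Edge, s: Strategy) =>
  s.bandwidth * (1_000_000 / Math.max(edge.bandwidthKbps, 1)) +
  s.latency * edge.latencyMs +
  s.energy * edge.energyCost +
  s.money * edge.monetaryCost;
```
(The "emergency" case is really a broadcast rather than a routing decision, so a real implementation would bypass the scorer and send on every available edge.)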
## The Developer Experience
As a developer, I don't want to manage sockets or handle Bluetooth pairing in my application code. I want a high-level intent-based API.
It might look something like this for a pub/sub pattern:
```typescript
import { mesh } from '@hyperconnect/sdk';
// Subscribe to temperature updates from any node
mesh.subscribe('sensors/temp', (msg) => {
console.log(`Received from ${msg.source}: ${msg.payload}`);
});
// Publish a command with constraints
await mesh.publish('controls/lights', { state: 'ON' }, {
strategy: 'energy_efficient', // Prefer Zigbee/BLE
scope: 'local_network' // Don't route over LTE
});
```
For use cases that require continuous data flow (like video streaming) or legacy application support, the mesh could offer a standard stream interface that handles the underlying transport switching transparently:
```typescript
// Stream-based API (Socket-compatible)
// 'my-nas-server' resolves to a Public Key from the Passport
const stream = await mesh.connect('my-nas-server', 8080, {
strategy: 'high_bandwidth'
});
// Looks just like a standard Node.js socket
stream.write(new Uint8Array([0x01, 0x02]));
stream.on('data', (chunk) => console.log(chunk));
```
There are other massive topics to cover here—like handling delegated guest access (a concept I call 'Visas') or how this becomes the perfect transport layer for Local-First (CRDT) apps—but those deserve their own articles. For now, let's look at the downsides.
## But first, the downsides
I am painting a rosy picture here, but I want to be honest about the challenges.
**Battery Life:** Maintaining multiple radio states and constantly updating a service graph is expensive. A protocol like this needs to be aggressive about sleeping. The "advertising" phase needs to be incredibly lightweight.
**Complexity:** Implementing backpressure handling across different transport layers is hard. TCP handles some of this, but when you are switching from a UDP stream on WiFi to a BLE characteristic, you are effectively rewriting the transport layer logic.
**Security:** While end-to-end encryption (enabled by the keys in the Passport) solves the privacy issue of relaying, implementing a secure cryptographic protocol is notoriously difficult. Ideally, we would need to implement forward secrecy to ensure that if a device key is compromised, past traffic remains secure. That is a heavy lift for a weekend project.
**Platform Restrictions:** Finally, there is the reality of the hardware we carry. Efficiently managing radio handovers requires low-level system access. On open hardware like a Raspberry Pi, this is accessible. However, on consumer devices like iPhones or Android phones, the OS creates a sandbox that restricts direct control over the radios. An app trying to manually toggle network interfaces or scan aggressively in the background will likely be killed by the OS to save battery or prevent background surveillance (like tracking your location via WiFi SSIDs).
## A Call to Build
This is a project I have long wanted to build, but never found the time to.
I am posting this idea hoping it might inspire someone else to take a crack at it. Or, perhaps, this will just serve as documentation for my future self if I ever clear my backlog enough to tackle it.
The dream of a truly hyperconnected personal mesh is vivid. We have the radios, we have the bandwidth, and we have the hardware. We just need the software glue to make it stick.

View File

Before

Width:  |  Height:  |  Size: 1.8 MiB

After

Width:  |  Height:  |  Size: 1.8 MiB

View File

Before

Width:  |  Height:  |  Size: 29 KiB

After

Width:  |  Height:  |  Size: 29 KiB

View File

@@ -0,0 +1,102 @@
---
title: My Home Runs Redux
pubDate: 2022-03-15
color: '#e80ccf'
description: ''
heroImage: ./assets/cover.png
slug: my-home-runs-redux
---
import graph from './assets/graph.png'
import { Image } from 'astro:assets'
I have been playing with smart homes for a long time. I have used most of the platforms out there, developed quite a few myself, and one thing I keep coming back to is **Redux**.
Those who know what Redux is may find this a weird choice, but for those who don't know Redux, I'll give a brief introduction to get you up to speed.
## What is Redux?
Redux is a state management framework initially built for a React talk by Dan Abramov and is still primarily associated with managing React applications. Redux has a declarative state derived through a "**reducer**" function. This reducer function takes in the current state and an event, and, based on that event, it gives back an updated state. So you have an initial state inside Redux, and then you **dispatch events** into it, each getting the current state and updating it. That means that the resulting state will always be the same given the same set of events.
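For readers who have never touched Redux, a reducer for a single lamp could look like this toy example (not tied to any particular smart home platform):
```typescript
// A toy reducer: the same initial state plus the same events, in the same
// order, always produce the same resulting state.
type LightState = { on: boolean; brightness: number };

type LightEvent =
  | { type: 'turned-on' }
  | { type: 'turned-off' }
  | { type: 'brightness-set'; value: number };

const initialState: LightState = { on: false, brightness: 100 };

const lightReducer = (state: LightState = initialState, event: LightEvent): LightState => {
  switch (event.type) {
    case 'turned-on':
      return { ...state, on: true };
    case 'turned-off':
      return { ...state, on: false };
    case 'brightness-set':
      return { on: event.value > 0, brightness: event.value };
    default:
      return state;
  }
};
```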
So why is a framework primarily used to keep track of application state for React-based frontends a good fit for a smart home? Well, your smart home platform most likely closely mimics this architecture already!
## Traditional Smart Home Architecture
First, an event goes in, such as a motion sensor triggering, or you set the bathroom light to 75% brightness in the interface. This event then goes into the platform and hits some automation or routine, resulting in an update request being sent to the correct devices, which then change their state to correspond to the new state.
...But that's not quite what happens on most platforms. Deterministic events may go into the system, but this usually doesn't cause a change to a deterministic state. Instead, it gets dispatched to the device, the device updates, the platform sees this change, and then it updates its state to represent that new state.
This distinction is essential because it comes with a few drawbacks:
* Because the event does not change the state but sends a request to the device that does, everything becomes **asynchronous** and can happen out of order. This behavior can be seen as an issue or a feature, but it does make integrating with it a lot harder from a technical point of view.
* The request is sent to the device as a "**fire-and-forget**" event. It then relies on the success of that request and the subsequent state change to be reported back from the device before the state is updated. This behavior means that if this request fails (something you often see with ZigBee-based devices), the device and the state don't get updated.
* Since the device is responsible for reporting the state change, you are dependent on having that actual device there to make the change. Without sending the changes to the actual device, you cannot test the setup.
So can we create a setup that gets away from these issues?
One more piece of terminology/philosophy: most smart home setups are, in my opinion, not really smart, just connected and, to some extent, automated. I want a design with some actual smartness to it. In this article, I will outline a setup closer to that of the connected, automated home, and at the end, I will give some thoughts on how to take this to the next level and make it smart.
## Adopting a Redux-based Architecture
We know what we want to achieve, and Redux can help us solve this. Remember that Redux takes actions and applies them in a deterministic way to produce a deterministic state.
It's time to go a bit further down the React rabbit hole because another thing from React-land comes in handy here: the concept of **reconciliation**.
Instead of dispatching events to the devices, waiting for them to update and report their state back, we can rely on reconciliation to update our device. For example, let's say we have a device state for our living room light that says it's at 80% brightness in our Redux store. So now we dispatch an event that sets it to 20% brightness.
Instead of sending this event to the device, we update the Redux state.
We have a **state listener** that detects when the state changes and compares it to the state of the actual device. In our case, it seems that the state indicates that the living room light should be at 20% but is, in fact, at 80%, so it sends a request to the actual device to update it to the correct value.
We can also do **scheduled reconciliation** to compare our Redux state to that of the actual devices. If a device fails to update its state after a change, it will automatically get updated on our next scheduled run, ensuring that our smart home devices always reflect our state.
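A stripped-down reconciler could be as small as the sketch below. The `getDeviceState`/`setDeviceState` functions are placeholders for whatever integration (Zigbee, MQTT, a cloud API) you actually use:
```typescript
// Compare the desired (Redux) state to what the device reports and only
// send commands for the differences.
type DeviceState = { on: boolean; brightness: number };

declare const getDeviceState: (id: string) => Promise<DeviceState>;
declare const setDeviceState: (id: string, state: DeviceState) => Promise<void>;

const reconcile = async (desired: Record<string, DeviceState>) => {
  for (const [id, want] of Object.entries(desired)) {
    const actual = await getDeviceState(id);
    if (actual.on !== want.on || actual.brightness !== want.brightness) {
      await setDeviceState(id, want); // a missed update simply heals on the next run
    }
  }
};

// Run on every state change and on a schedule:
// store.subscribe(() => reconcile(store.getState().devices));
// setInterval(() => reconcile(store.getState().devices), 60_000);
```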
_Sidenote: Yes, of course, I have done a proof of concept using React with a home-built reconciliation that reflected the virtual DOM onto physical devices, just to have had a house that ran React-Redux._
Let's go through our list of issues with how most platforms handle this. We can see that we have eliminated all of them by switching to this Redux-reconciliation approach: we update the state directly to run it synchronously. We can re-run the reconciliation, so failed or dropped device updates get re-run. We don't require any physical devices as our state is directly updated.
We now have a robust, reliable state management mechanism for our smart home; it's time to add some smarts to it. This part is a little outside the article's main focus, as it is just my way of doing it; there may be much better ways, so use it at your discretion.
## Adding Intelligence: Intents vs. Events
Redux has the concept of **middlewares**, which are stateful functions that live between the event going into Redux and the reducer updating the state. These middlewares allow Redux to deal with side effects and do event transformations.
Time for another piece of my smart home philosophy: Most smart homes act on events, and I have used the word throughout this article, but to me, events are not the most valuable thing when creating a smart home. Instead, I would argue that the goal is to deal with **intents** rather than events. For instance, an event could be that I started to play a video on the TV. But that states a fact. What we want to do is instead capture what I am trying to achieve, the "intent." So let's split this event into two intents: if the video is less than one hour, I want to watch a TV show; if it is more, I want to watch a movie.
These intents allow us to not deal with weak-meaning events to do complex operations but instead split our concern into two separate concepts: **intent classification** and **intent execution**.
The last piece we need is a direct way of updating devices, as we cannot capture everything through our intent classifier. For instance, if I sit down to read a book (an activity that generates no sensor data for our system to react to), I will still need a way to adjust device states manually. (I could add a button that would dispatch a reading intent.)
I have separated the events going into Redux into two types:
* **Control events**, which directly control a device.
* **Environment events**, which represent sensor data coming in (pushing a button, motion sensor triggering, TV playing, etc.).
Now comes the part I have feared, where I need to draw a diagram... sorry.
<Image src={graph} alt='graph' />
So this shows our final setup.
Events go into our Redux setup, either environment or control.
Control events go straight to the reducer, and the state is updated.
Environment events first go to the **intent classifier**, which uses previous events, the current state, and the incoming event to derive the correct intent. The intent then goes into our **intent executor**, which converts the intent into a set of actual device changes, which get sent to our reducer, and the state is then updated.
Lastly, we invoke the reconciliation to update our real devices to reflect our new state.
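Sketched as Redux middleware, the "movie vs. TV show" example from earlier could look roughly like this (the action names and shapes are made up for illustration):
```typescript
// A sketch only: action names and payloads are invented for this example.
import type { Middleware } from 'redux';

const intentClassifier: Middleware = (_store) => (next) => (action: any) => {
  if (action.type === 'environment/video-started') {
    const intent = action.durationMinutes > 60 ? 'watch-movie' : 'watch-tv-show';
    // Hand a derived intent to the next middleware instead of the raw event
    return next({ type: `intent/${intent}`, source: action });
  }
  return next(action); // control events pass straight through to the reducer
};

const intentExecutor: Middleware = (_store) => (next) => (action: any) => {
  if (action.type === 'intent/watch-movie') {
    // Convert the intent into concrete device changes for the reducer
    next({ type: 'control/set-brightness', device: 'living-room', value: 20 });
  }
  return next(action);
};

// Wire up in order; the classifier must run before the executor:
// createStore(reducer, applyMiddleware(intentClassifier, intentExecutor));
```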
There we go! Now we have ended up with a self-contained setup. We can run it without the reconciliation or mock it to create tests for our setup and work without changing any real devices, and we can re-run the reconciliation on our state to ensure our state gets updated correctly, even if a device should miss an update.
**Success!!!**
## The Next Level: A "Smart" Home
But I promised to give an idea of how to take this smart home and make it actually "smart."
Let's imagine that we did not want to "program" our smart home. Instead, we wanted to use it; turning the lights on and off using the switches when we entered and exited a room, dimming the lights for movie time, and so on, and over time we want our smart home to pick up on those routines and start to do them for us.
We have a setup where we both have control events and environments coming in. Control events represent how we want the state of our home to be in a given situation. Environment events represent what happened in our home. So we could store those historically with some machine learning and look for patterns.
Let's say you always dim the light when playing a movie that is more than one hour long; your smart home would be able to recognize this pattern and automatically start to do this routine for you.
Would this work? I don't know. I am trying to get more skilled at machine learning to find out.

Binary file not shown.

After

Width:  |  Height:  |  Size: 1.7 MiB

View File

@@ -0,0 +1,203 @@
---
title: Your npm dependencies are plotting against you
subtitle: ...and other cheerful thoughts about JS supply-chain risks
pubDate: 2025-09-19
color: '#e80ccf'
description: ''
heroImage: ./assets/cover.png
slug: node-security
---
_Audience note: This is for developers and DevOps folks who build JS services and haven't spent their weekends threat-modeling install hooks for fun. Me neither (... well maybe sometimes). I'm not a security pro; this post is opinionated guidance to build intuition—not a substitute for a security professional or a comprehensive secure SDLC program._
Our industry runs on packages published by strangers on the internet at 2 a.m. from a coffee shop WiFi. That's charming and also quite terrifying. Attackers know this. Compromise of a single npm package or maintainer account can reach developers, CI/CD, servers, and even end users—without touching your codebase.
## The common targets of an npm based supply chain attack
### The developer workstation
**Why attackers love it:** It's an all-you-can-eat buffet of secrets—SSH keys, cloud creds from CLIs, personal access tokens, Git config, npm tokens, Slack/1Password sessions, and your .env files.
**How it gets hit:** npm install scripts and dev-time tooling run with your privileges. A malicious package can scrape env vars, local files, git remotes, or your shell history—then exfiltrate.
### The CI/CD pipeline
**Why attackers love it:** It often has the power to ship to prod. Think deployment keys, cloud credentials, container registry tokens, signing keys, and sometimes global admin perms. Sometimes these are broadly available across pipelines and forks.
**How it gets hit:** Same install-time and run-time execution during builds/tests; pipelines frequently run “npm install && npm test” on code from pull requests. If secrets are exposed to untrusted PR jobs, game over.
### The application server
**Why attackers love it:** Direct lines to databases, queues, internal APIs, and service meshes. Servers often have longer-lived credentials and generous network access. Also a good way to get a foot in the door for further network pivoting.
**How it gets hit:** Runtime attacks via the backend dependency graph—if an imported library goes rogue, it executes with server privileges and can read env vars, connect to internal services, or tamper with logs and telemetry.
### The end user
**Why attackers love it:** That's where the money (and data) is. Injected frontend code can skim credentials and wallets, hijack sessions, or quietly mine crypto in the background.
**How it gets hit:** Dependency-based injection of malicious JS into your build artifacts or CDN. The browser happily runs whatever you ship.
## How do they get in? Three common attack mechanics
### Install-time attacks (dev and CI/CD)
**What it is:** Abuse of npm lifecycle scripts (preinstall, install, postinstall, prepare) or native binary install hooks. When you run npm install, these scripts execute on the host.
**What it can do:** Read env vars and files, steal tokens from known locations, run network calls to exfiltrate secrets, modify the project (or lockfile), or plant persistence for later runs.
**Why it works:** Install scripts are normal for legitimate packages (building native modules, generating code). The line between “helpful build step” and “exfil script” can be a single curl.
### Runtime attacks (dev, CI, and servers)
**What it is:** Malicious code that executes when your app imports the package, during initialization, or at a hot code path in production. Could be time-bombed, user-conditional, or input-triggered.
**What it can do:** Log scraping, credential harvesting, data exfiltration, lateral movement inside the VPC, monkey-patching core modules, or sabotaging output only under certain conditions (e.g., cloud provider metadata present).
**Why it works:** Transitive dependencies load automatically; tree-shaking doesn't save you if the malicious path is executed; tests run your code too.
### Injection into shipped artifacts (end users)
**What it is:** Malicious code added to your build pipeline or artifacts that reach the browser. Could be a compromised package, a tampered CDN asset, or a poisoned build step.
**What it can do:** Inject script tags, skim forms or wallet interactions, steal JWTs, or swap API endpoints. The browser happily executes whatever came out of your build.
**Why it works:** Frontend bundles are opaque blobs; source maps and integrity checks aren't always enforced; many teams rely on third-party scripts or dynamic imports.
## How attackers get that malicious code into your graph (the “entry points”)
- Maintainer account takeover: Password reuse, phishing, token theft, or MFA fatigue on a real maintainer's npm/GitHub accounts.
- Typosquatting and lookalikes: left-pad → leftpad, lodash-core vs lodashcore, etc.
- Dependency confusion: Publish a package to the public npm registry with the same name as an internal package, for instance `@your-company/important-stuff`. If you install the package without a correct scope configuration, you will get the malicious version.
- Compromised build of a legitimate package: Malicious code only in the distributed tarball, not the GitHub source.
- Hijacked release infrastructure: Malicious CI secrets or release scripts in upstream projects.
- Social engineering: “Helpful” PRs that introduce a dependency or tweak scripts.
## Mitigation: Making Your npm Supply Chain a Little More Boring (in a Good Way)
Goal: shrink the blast radius across the four targets (developer, CI/CD, servers, end users) and the three attack mechanics (install-time, runtime, injection). None of this replaces a real secure SDLC or a security professional—but it will dramatically raise the bar.
### 1. Pin Your Dependency Graph and Make Installs Reproducible
- **What to do:**
- Commit your lockfile. Always install with a lockfile-enforcing mode:
- `npm`: `npm ci`
- `pnpm`: `pnpm install --frozen-lockfile`
- `yarn` (Classic): `yarn install --frozen-lockfile`
- `yarn` (Berry): `yarn install --immutable`
- Enable Corepack and declare the package manager/version in `package.json` to prevent lockfile confusion and mismatched security settings across machines and CI.
- Run `corepack enable`
- Add `"packageManager": "pnpm@x.y.z"` (or `npm`/`yarn`) to `package.json`.
- **Why it helps:** Prevents surprise version drifts, enables static analysis of exactly-what's-installed, and keeps "oops we pulled the bad minor release" from happening mid-build.
### 2. Tame Lifecycle Scripts (Install-Time Attack Surface)
- **What to do:**
- **Default to no install scripts, then allow only whats required:**
- **`pnpm`:** Use [`onlyBuiltDependencies`](https://pnpm.io/settings#onlybuiltdependencies) in `pnpm-workspace.yaml`/`.npmrc` to whitelist packages that may run install scripts (great for native modules). You can also set [`strictDepBuilds`](https://pnpm.io/settings#strictdepbuilds), which makes the build fail if there are unreviewed install scripts.
- **`npm`/`yarn`:** Disable scripts by default (`npm config set ignore-scripts true` or `yarn config set enableScripts false`), then run needed scripts explicitly for approved packages (e.g., `npm rebuild <pkg>`).
- For npm/yarn whitelisting at scale, use a maintained helper like LavaMoats [`allow-scripts`](https://www.npmjs.com/package/@lavamoat/allow-scripts) (`npx @lavamoat/allow-scripts`) to manage an explicit allow-list.
- Treat `prepare` scripts as “runs on dev boxes and CI” code—only allow for packages you trust to execute on your machines.
- **Why it helps:** Install hooks are a primary path to dev and CI credential theft. A deny-by-default stance turns “one malicious `preinstall`” into “no-op unless allowlisted.”
### 3. Don't Update Instantly Unless It's a Security Fix
- **What to do:**
- **Delay non-security updates** to let the ecosystem notice regressions or malicious releases:
- **`pnpm (>=10.16.0)`:** Set [`minimumReleaseAge`](https://pnpm.io/settings#minimumreleaseage) in `pnpm-workspace.yaml` or `.npmrc` (e.g., `10080` for 7 days).
- **Renovate:** Use [`minimumReleaseAge`](https://docs.renovatebot.com/configuration-options/#minimumreleaseage) to hold PRs until a package has “aged.”
- If you prefer manual updates, tools like [`taze`](https://www.npmjs.com/package/taze) can help you batch and filter upgrades.
- **Exception:** apply security patches immediately (Dependabot/Renovate security PRs).
- **Why it helps:** Many supply-chain incidents are discovered within a few days. A short delay catches a lot of fallout without leaving you perpetually stale.
### 4. Continuous Dependency Monitoring
- **What to do:**
- Enable GitHub Dependabot alerts and (optionally) security updates.
- Consider a second source like [Snyk](https://snyk.io/), [Trivy](https://trivy.dev/latest/) or [Socket.dev](https://socket.dev/) for malicious-pattern detection beyond CVEs.
- Make `audit` part of CI (`npm audit`, `pnpm audit`, `yarn dlx npm-check-updates + advisories`) but treat results as signals, not gospel.
- **Why it helps:** Quick detection matters; you can roll back or block promotion if an alert fires.
### 5. Secrets: Inject, Scope, and Make Them Short-Lived
- **What to do:**
- **Prefer runtime secret injection over files on disk.** Examples:
- [1Password: `op run -- <your command>`](https://developer.1password.com/docs/cli/reference/commands/run/)
- [with-ssm: `with-ssm -- <your command>`](https://github.com/morten-olsen/with-ssm) (disclaimer: made by me)
- Separate secrets available at install vs runtime. Most builds don't need prod creds—don't make them available.
- In CI, use OIDC federation to clouds (e.g., GitHub Actions → AWS/GCP/Azure) for short-lived tokens instead of static long-lived keys. ([AWS](https://docs.github.com/en/actions/how-tos/secure-your-work/security-harden-deployments/oidc-in-aws))
- Never expose prod secrets to PRs from forks. Use GitHub environments with required reviewers and “secrets only on protected branches.”
- **Why it helps:** Even if an attacker runs code, they only get ephemeral, least-privilege creds for that one task—not the keys to the kingdom.
### 6. SSH Keys: Hardware-Backed or at Least in a Secure Agent
- **What to do:**
- [Prefer a hardware token (YubiKey) for SSH and code signing.](https://github.com/drduh/YubiKey-Guide)
- Or use a secure agent: [1Password SSH Agent](https://developer.1password.com/docs/ssh/agent/) or [KeePassXC's SSH agent support](https://keepassxc.org/docs/#faq-ssh-agent-keys).
- Limit key usage to specific hosts, require touch/approval, and avoid storing private keys unencrypted on disk.
- **Why it helps:** Reduces credential theft on dev boxes and narrows lateral movement if a machine is compromised.
### 7. Contain Installs and Runs (Local and CI)
- **What to do:**
- Use containers or ephemeral VMs for dependency installs, builds, and tests.
- Run as a non-root user; prefer read-only filesystems and `tmpfs` for caches.
- Don't mount your whole home directory into the container; mount only what's needed.
- **Consider egress restrictions during install/build:**
- Fetch packages from an internal registry proxy (Artifactory, Nexus, Verdaccio), then block direct outbound network calls from lifecycle scripts.
- Cache packages safely (content-addressed, read-only) to reduce repeated network trust.
- **Why it helps:** Install-time and runtime code sees a minimal, temporary filesystem and limited network—greatly shrinking what it can steal or persist.
### 8. GitHub Org/Repo Hygiene for Secrets and Deployments
- **What to do:**
- Avoid org-wide prod secrets. Prefer per-environment secrets bound to protected branches/environments with required reviewers.
- Use least-privilege `GITHUB_TOKEN` permissions and avoid over-scoped classic PATs.
- Lock down workflows: avoid `pull_request_target` unless you're very sure; keep untrusted PRs in isolated jobs with no secrets.
- Gate deployments (manual approvals, environment protections) and use separate credentials for staging vs prod.
- Consider policy-as-code for repo baselines.
- Handling environment secrets and repo compliance at scale is currently hard to do on GitHub. I am working on a sideproject [git-law](https://github.com/morten-olsen/git-law), but it is not ready for primetime yet. If you know another alternative, please reach out.
- **Why it helps:** Prevents a single compromised developer or workflow from reaching prod with god-mode credentials.
### 9. Frontend Integrity and User Protection
- **What to do:**
- Bundle and self-host third-party scripts when possible. If you must load from a CDN, use Subresource Integrity (`integrity=...`) and pin exact versions.
- Set a strict Content Security Policy with nonces/hashes and disallow `inline`/`eval`. Consider Trusted Types for DOM-sink safety.
- Don't expose secrets to the browser you wouldn't post on a billboard. Assume any injected JS can read what the page can.
- **Why it helps:** Raises the difficulty of injecting, swapping, or skimming scripts in your end users' browsers.
### 10. Server-Side Guardrails for Runtime Attacks
- **What to do:**
- Principle of least privilege for app IAM: narrow roles, scoped database users, and service-to-service auth.
- Egress controls and allowlists from app containers to the internet. Alert on unusual destinations (a toy application-level sketch follows this list).
- Consider Node's [permission model](https://nodejs.org/api/permissions.html): run with flags that restrict `fs`/`net`/`process` access to what the app needs.
- Centralized logging with egress detection for secrets in logs; treat unexpected DNS/HTTP calls as suspicious.
- **Why it helps:** Even if a dependency misbehaves at runtime, it can't freely scrape the filesystem or exfiltrate to arbitrary endpoints.
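Real egress control belongs at the network layer (security groups, a proxy, Kubernetes NetworkPolicies), but to build intuition, here is a toy application-level version that wraps `fetch` with an allowlist; the hostnames are made up:
```typescript
// Toy example only; real egress control belongs at the network layer.
const ALLOWED_HOSTS = new Set(['db.internal.example', 'hooks.slack.com']);

const guardedFetch: typeof fetch = async (input, init) => {
  const url = new URL(input instanceof Request ? input.url : String(input));
  if (!ALLOWED_HOSTS.has(url.hostname)) {
    // Fail loudly so unexpected destinations show up in logs and alerts
    throw new Error(`Blocked egress to unexpected host: ${url.hostname}`);
  }
  return fetch(input, init);
};

// Usage: pass guardedFetch to HTTP clients that accept a custom fetch implementation,
// or call it directly instead of the global fetch:
// await guardedFetch('https://db.internal.example/health');
```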
### 11. Publish and Consume with Provenance (When You Can)
- **What to do:**
- If you publish packages, use `npm publish --provenance` from CI with signing to attach attestations.
- Prefer dependencies that provide provenance and verifiable builds where possible.
- **Why it helps:** Makes “tarball differs from source” and tampered release pipelines easier to detect.
---
### Quick-Start Recipe (Copy/Paste Friendly)
- **`corepack enable`**, and set `"packageManager"` in `package.json`.
- **Enforce lockfiles in CI**: `npm ci` / `pnpm install --frozen-lockfile` / `yarn install --immutable`.
- **Default-disable lifecycle scripts**; whitelist only required ones (`pnpm onlyBuiltDependencies` or LavaMoat `allow-scripts`).
- **Use `minimumReleaseAge`** (`pnpm`) or Renovate's `minimumReleaseAge`; fast-track only security fixes.
- **Turn on Dependabot alerts**; add a second scanner for defense in depth.
- **Inject secrets at runtime** (`op run --` / `with-ssm --`) and use cloud OIDC in CI.
- **Containerize installs/builds**, run as non-root, restrict egress, and use an internal registry proxy.
- **Lock down GitHub environments**; no org-wide prod secrets; restrict secrets from forked PRs.
- **Add CSP + SRI for frontend**; bundle third-party JS.
- **Tighten server IAM, egress, and metadata access**; consider Node permission flags.
## Final thoughts
This journey through the precarious landscape of npm supply-chain security might seem daunting, but remember: the goal isn't to make attacks impossible. Instead, by implementing these strategies, you're building a more resilient, defensible system. Each step, from pinning dependencies to taming lifecycle scripts and securing secrets, adds another layer of protection, making your npm supply chain less of a wild frontier and more of a well-guarded stronghold. Stay vigilant, stay updated, and keep building securely!

Binary file not shown.

After

Width:  |  Height:  |  Size: 8.9 MiB

View File

@@ -0,0 +1,234 @@
---
title: A Simple Service Pattern for Node.js
pubDate: 2026-01-09
description: 'A lightweight approach to dependency injection in Node.js without the framework bloat.'
heroImage: ./assets/cover.png
slug: simple-service-pattern
color: '#e80ccf'
---
For a long time, I felt guilty about not using "Real Architecture" in my Node.js side projects.
I wanted to move fast. I didn't want to spend three days setting up modules, decorators, and providers just to build a simple API. I looked at frameworks like NestJS and felt that while powerful, they often felt like buying a semi-truck to carry a bag of groceries.
But the alternative—the "Wild West" of random imports—eventually slows you down too. You hit a wall when you try to write tests, or when a simple script crashes because it accidentally connected to Redis just by importing a file.
This post is for developers who want to keep their development speed high. It introduces a pattern that gives you the biggest benefits of Dependency Injection—testability, clean shutdown, and lazy loading—with minimal effort and zero external libraries.
It is the middle ground that lets you write code fast today, without regretting it tomorrow.
## But first, the downsides
Before I show you the code, I need to be honest about what this is. This pattern is effectively a **Service Locator**.
In pure software architecture circles, the Service Locator pattern is often considered an anti-pattern. Why? Because it hides dependencies. Instead of a class screaming "I NEED A DATABASE" in its constructor arguments, it just asks for the `Services` container and quietly pulls what it needs.
This can make the dependency graph harder to visualize statically. If you are building a massive application with hundreds of services and complex circular dependencies, you might be better off with a robust DI framework that handles dependency resolution graphs for you.
However, I would also offer a gentle challenge: If your dependency graph is so complex that you *need* a heavy framework to manage it, maybe the issue isn't the tool—maybe the service itself is doing too much.
In my experience, keeping services focused and modular often eliminates the need for complex wiring. If your service is simple, your tools can be simple too. For most pragmatic Node.js services, this pattern works surprisingly well.
## The Implementation
Here is the entire container implementation. It relies on standard ES6 Maps and Symbols. It's less than 60 lines of code.
```typescript
const destroy = Symbol('destroy');
const instanceKey = Symbol('instances');
type ServiceDependency<T> = new (services: Services) => T & {
[destroy]?: () => Promise<void> | void;
};
class Services {
[instanceKey]: Map<ServiceDependency<unknown>, unknown>;
constructor() {
this[instanceKey] = new Map();
}
public get = <T>(service: ServiceDependency<T>) => {
if (!this[instanceKey].has(service)) {
this[instanceKey].set(service, new service(this));
}
const instance = this[instanceKey].get(service);
if (!instance) {
throw new Error('Could not generate instance');
}
return instance as T;
};
public set = <T>(service: ServiceDependency<T>, instance: Partial<T>) => {
this[instanceKey].set(service, instance);
};
public destroy = async () => {
await Promise.all(
Array.from(this[instanceKey].values()).map(async (instance) => {
if (
typeof instance === 'object' &&
instance &&
destroy in instance &&
typeof instance[destroy] === 'function'
) {
await instance[destroy]();
}
}),
);
};
}
export { Services, destroy };
```
It is essentially a Singleton-ish Map that holds instances of your classes. When you ask for a class, it checks if it exists. If not, it creates it, passing itself (`this`) into the constructor.
## How to use it
Let's look at a practical example. Say we have a `DatabaseService` that connects to Postgres, and a `PostsService` that needs to query it.
First, the database service.
```typescript
import knex, { type Knex } from 'knex';
class DatabaseService {
#instance: Promise<Knex> | undefined;
#setup = async () => {
const instance = knex({ client: 'pg' /* config */});
await instance.migrate.latest();
return instance;
};
public getInstance = async () => {
// Lazy loading: We don't connect until someone asks for the connection
if (!this.#instance) {
this.#instance = this.#setup();
}
return this.#instance;
};
}
export { DatabaseService };
```
Now, the `PostsService` consumes it:
```typescript
import { Services } from './services';
import { DatabaseService } from './database-service';
class PostsService {
#services: Services;
constructor(services: Services) {
this.#services = services;
}
public getBySlug = async (slug: string) => {
// Resolve the dependency
const databaseService = this.#services.get(DatabaseService);
const database = await databaseService.getInstance();
return database('posts').where({ slug }).first();
};
}
export { PostsService };
```
Notice that we ask for `DatabaseService` inside the `getBySlug` method, not in the constructor. This is intentional. By resolving dependencies at runtime, we preserve lazy loading (the database connection doesn't start until we actually query a post) and we allow for dependencies to be swapped out in the container even after the `PostsService` has been instantiated—a huge plus for testing.
And finally, wiring it up in your application entry point:
```typescript
const services = new Services();
const postsService = services.get(PostsService);
const post = await postsService.getBySlug('hello-world');
console.log(post);
```
## The "CLI" Benefit: Lazy Instantiation
One of the biggest wins here is lazy instantiation.
In many Node.js apps, we tend to initialize everything at startup. You start the app, and it immediately connects to the database, the Redis cache, the RabbitMQ listener, and the email provider.
But what if you are just running a CLI script to rotate some keys? Or a script to seed some test data? You don't want your script to hang because it's trying to connect to a Redis instance that doesn't exist in your local environment.
With this pattern, resources are only initialized when `get()` is called. If your script never asks for the `EmailService`, the `EmailService` never gets created.
## Testing made easy
This is where the `.set()` method shines. Because everything flows through the container, you can intercept requests for heavy services and swap them out for mocks.
```typescript
import { Services } from './services';
import { PostsService } from './posts-service';
import { DatabaseService } from './database-service';
test('it should return a post', async () => {
const services = new Services();
// Inject a mock database service
services.set(DatabaseService, {
// Mimic the tiny slice of the knex API that getBySlug actually uses:
// database('posts').where({ slug }).first()
getInstance: async () =>
  ((_table: string) => ({
    where: () => ({ first: async () => ({ id: 1, title: 'Test Post' }) }),
  })) as any,
});
const postsService = services.get(PostsService);
const post = await postsService.getBySlug('test');
expect(post.title).toBe('Test Post');
});
```
No `jest.mock`, no module swapping magic. Just plain object substitution.
## Graceful Cleanup
Finally, there is that `[destroy]` symbol. Cleaning up resources is often an afterthought, but it is critical for preventing memory leaks and ensuring your tests exit cleanly.
You can implement the destroy interface on any service:
```typescript
import { destroy } from './services';
class DatabaseService {
// ... setup code ...
[destroy] = async () => {
if (this.#instance) {
const db = await this.#instance;
await db.destroy();
}
};
}
```
When your application shuts down, you simply call:
```typescript
process.on('SIGTERM', async () => {
await services.destroy();
process.exit(0);
});
```
This ensures that every service that registered a destroy method gets a chance to clean up its connections.
## Summary
This isn't a silver bullet. If you need complex dependency graphs, lifecycle scopes (request-scoped vs singleton-scoped), or rigid interface enforcement, there are better options out there.
But if you want:
1. **Zero dependencies**
2. **Lazy loading** out of the box
3. **Simple mocking** for tests
4. A way to **clean up resources**
Then copy-paste that `Services` class into your `utils` folder and give it a spin. Simplicity is often its own reward.

View File

@@ -1,3 +1,22 @@
---
name: Morten Olsen
url: https://mortenolsen.pro
location:
city: Copenhagen
countryCode: dk
profiles:
github:
network:
name: GitHub
username: morten-olsen
url: https://github.com/morten-olsen
linkedin:
network:
name: LinkedIn
username: mortenolsendk
url: https://www.linkedin.com/in/mortenolsendk
---
As a software engineer with a diverse skill set in frontend, backend, and DevOps, I find my greatest satisfaction in unraveling complex challenges and transforming them into achievable solutions. My career has predominantly been in frontend development, but my keen interest and adaptability have frequently drawn me into backend and DevOps roles. I am driven not by titles or hierarchy but by opportunities where I can make a real difference through my work.
In every role, I strive to blend my technical skills with a collaborative spirit, focusing on contributing to team goals and delivering practical, effective solutions. My passion for development extends beyond professional settings; I continually engage in personal projects to explore new technologies and methodologies, keeping my skills sharp and current.

View File

@@ -1,48 +0,0 @@
import type { ResumeSchema } from '@/types/resume-schema.js'
import { Content } from './description.md'
import image from './profile.jpg'
const basics = {
name: 'Morten Olsen',
tagline: "Hi, I'm Morten and I make software 👋",
email: 'fbtijfdq@void.black',
url: 'https://mortenolsen.pro',
image: image.src,
location: {
city: 'Copenhagen',
countryCode: 'DK',
region: 'Capital Region of Denmark'
},
profiles: [
{
network: 'GitHub',
icon: 'mdi:github',
username: 'morten-olsen',
url: 'https://github.com/morten-olsen'
},
{
network: 'LinkedIn',
icon: 'mdi:linkedin',
username: 'mortenolsendk',
url: 'https://www.linkedin.com/in/mortenolsendk'
}
],
languages: [
{
name: 'English',
fluency: 'Conversational'
},
{
name: 'Danish',
fluency: 'Native speaker'
}
]
} satisfies ResumeSchema['basics']
const profile = {
basics,
image,
Content
}
export { profile }

View File

@@ -1,10 +0,0 @@
---
title: Bob the algorithm
link: /articles/bob-the-algorithm
keywords:
- Typescript
- React Native
- Algorithmic
---
`// TODO`

View File

@@ -1,9 +0,0 @@
---
title: Bob the algorithm
link: https://github.com/morten-olsen/mini-loader
keywords:
- Typescript
- Task management
---
`// TODO`

View File

@@ -15,5 +15,8 @@ technologies:
- Apollo
- .Net
- Rust
- Python
- FastAPI
- LangChain
---

View File

@@ -1,13 +0,0 @@
import { getCollection } from 'astro:content'
class Articles {
public find = () => getCollection('articles')
public get = async (slug: string) => {
const collection = await this.find()
return collection.find((entry) => entry.data.slug === slug)
}
}
type Article = Exclude<Awaited<ReturnType<Articles['get']>>, undefined>
export { Articles, type Article }
@@ -0,0 +1,32 @@
import { getCollection, getEntry } from "astro:content";
class Experiences {
public getAll = async () => {
const collection = await getCollection('experiences');
return collection.sort(
(a, b) => new Date(b.data.startDate).getTime() - new Date(a.data.startDate).getTime(),
);
}
public get = async (id: string) => {
const entry = await getEntry('experiences', id);
if (!entry) {
throw new Error(`Experience ${id} not found`);
}
return entry;
}
public getCurrent = async () => {
const all = await this.getAll();
return all.find((experience) => !experience.data.endDate);
}
public getPrevious = async () => {
const all = await this.getAll();
return all.filter((experience) => experience.data.endDate);
}
}
const experiences = new Experiences();
export { experiences }
src/data/data.posts.ts
@@ -0,0 +1,45 @@
import { getCollection, getEntry, type CollectionEntry } from "astro:content";
import { profile } from "./data.profile";
class Posts {
#map = (post: CollectionEntry<'posts'>) => {
const readingTime = Math.ceil(post.body?.split(/\s+/g).length / 200) || 1;
return Object.assign(post, {
readingTime,
jsonLd: {
'@context': 'https://schema.org',
'@type': 'BlogPosting',
headline: post.data.title,
image: post.data.heroImage.src,
datePublished: post.data.pubDate.toISOString(),
keywords: post.data.tags,
inLanguage: 'en-US',
author: {
'@type': 'Person',
name: profile.name,
}
},
});
}
public getPublished = async () => {
const collection = await getCollection('posts');
return collection
.map(this.#map)
.sort(
(a, b) => new Date(b.data.pubDate).getTime() - new Date(a.data.pubDate).getTime(),
)
}
public get = async (id: string) => {
const entry = await getEntry('posts', id);
if (!entry) {
throw new Error(`Entry ${id} not found`)
}
return this.#map(entry);
}
}
const posts = new Posts();
export { posts }
src/data/data.profile.ts
@@ -0,0 +1,133 @@
import { z } from 'astro:content';
import { frontmatter, Content } from '../content/profile/profile.md';
import image from '../content/profile/profile.jpg';
import type { ResumeSchema } from '~/types/resume-json';
import { positionWithTeam } from '~/utils/utils.format';
const schema = z.object({
name: z.string(),
tagline: z.string().optional(),
role: z.string().optional(),
url: z.string(),
contact: z.object({
email: z.string().optional(),
phone: z.string().optional(),
}).optional(),
location: z.object({
city: z.string(),
countryCode: z.string(),
}),
profiles: z.record(z.string(), z.object({
network: z.object({
name: z.string(),
}).optional(),
username: z.string().optional(),
url: z.string(),
})),
image: z.object({
src: z.string(),
format: z.enum(["png", "jpg", "jpeg", "tiff", "webp", "gif", "svg", "avif"]),
width: z.number(),
height: z.number(),
})
});
const data = schema.parse({
...frontmatter,
image: image,
})
const profile = Object.assign(data, {
Content,
getJsonLd: async () => {
const { experiences } = await import('./data.experiences');
const currentExperience = await experiences.getCurrent();
const previousExperiences = await experiences.getPrevious();
return {
'@context': 'https://schema.org',
'@type': 'Person',
id: '#me',
name: data.name,
email: data.contact?.email,
image: data.image.src,
url: data.url,
jobTitle: currentExperience?.data.position,
contactPoint: Object.entries(data.profiles).map(([id, profile]) => ({
'@type': 'ContactPoint',
contactType: id,
identifier: profile.username,
url: profile.url
})),
address: {
'@type': 'PostalAddress',
addressLocality: data.location.city,
// addressRegion: data.profile.basics.location.region,
addressCountry: data.location.countryCode
},
sameAs: Object.values(data.profiles),
hasOccupation: currentExperience && {
'@type': 'EmployeeRole',
roleName: currentExperience.data.position,
startDate: currentExperience.data.startDate.toISOString()
},
worksFor: currentExperience && {
'@type': 'Organization',
name: currentExperience?.data.company.name,
sameAs: currentExperience?.data.company.url
},
alumniOf: previousExperiences.map((w) => ({
'@type': 'Organization',
name: w.data.company.name,
sameAs: w.data.company.url,
employee: {
'@type': 'Person',
hasOccupation: {
'@type': 'EmployeeRole',
roleName: positionWithTeam(w.data.position.name, w.data.position.team),
startDate: w.data.startDate.toISOString(),
endDate: w.data.endDate?.toISOString()
},
sameAs: '#me'
}
}))
}
},
getResumeJson: async (): Promise<ResumeSchema> => {
const { experiences } = await import('./data.experiences');
const { skills } = await import('./data.skills');
const allExperiences = await experiences.getAll();
const allSkills = await skills.getAll();
return {
basics: {
name: data.name,
label: data.role,
image: data.image.src,
email: data.contact?.email,
phone: data.contact?.phone,
url: data.url,
location: data.location && {
city: data.location.city,
countryCode: data.location.countryCode,
},
profiles: Object.entries(data.profiles || {}).map(([id, profile]) => ({
network: profile.network?.name || id,
username: profile.username,
url: profile.url,
}))
},
work: allExperiences.map((experience) => ({
name: experience.data.company.name,
position: positionWithTeam(experience.data.position.name, experience.data.position.team),
url: experience.data.company.url,
startDate: experience.data.startDate.toISOString(),
endDate: experience.data.endDate?.toISOString(),
})),
skills: allSkills.map((skill) => ({
name: skill.data.name,
keywords: skill.data.technologies,
}))
}
}
});
export { profile };
@@ -1,13 +0,0 @@
import { getCollection } from 'astro:content'
class References {
public find = () => getCollection('references')
public get = async (slug: string) => {
const collection = await this.find()
return collection.find((entry) => entry.data.slug === slug)
}
}
type Reference = Exclude<Awaited<ReturnType<References['get']>>, undefined>
export { References, type Reference }
@@ -1,5 +0,0 @@
const site = {
theme: '#30E130'
}
export { site }
@@ -1,13 +1,20 @@
import { getCollection } from 'astro:content'
import { getCollection, getEntry } from "astro:content";
class Skills {
public find = () => getCollection('skills')
public get = async (slug: string) => {
const collection = await this.find()
return collection.find((entry) => entry.data.slug === slug)
public getAll = async () => {
const collection = await getCollection('skills');
return collection;
}
public get = async (id: string) => {
const entry = await getEntry('skills', id);
if (!entry) {
throw new Error(`Could not find skill ${id}`);
}
return entry;
}
}
type Skill = Exclude<Awaited<ReturnType<Skills['get']>>, undefined>
const skills = new Skills();
export { Skills, type Skill }
export { skills }
@@ -1,25 +1,8 @@
import { profile } from '../content/profile/profile.js'
import { type Article, Articles } from './data.articles.js'
import { References } from './data.references.ts'
import { site } from './data.site.ts'
import { Skills } from './data.skills.ts'
import { getJsonLDResume, getJsonResume } from './data.utils.js'
import { Work, type WorkItem } from './data.work.js'
import { posts } from './data.posts';
import { experiences } from './data.experiences';
import { profile } from './data.profile';
import { skills } from './data.skills';
class Data {
public articles = new Articles()
public work = new Work()
public references = new References()
public skills = new Skills()
public profile = profile
public site = site
const data = { posts, experiences, profile, skills };
public getJsonResume = getJsonResume.bind(null, this)
public getJsonLDResume = getJsonLDResume.bind(null, this)
}
const data = new Data()
type Profile = typeof profile
export type { Article, Profile, WorkItem }
export { data, Data }
export { data };
@@ -1,89 +0,0 @@
import type { ResumeSchema } from '@/types/resume-schema.js'
import type { Article, Data } from './data'
const getJsonResume = async (data: Data) => {
const profile = data.profile
const resume = {
basics: profile.basics
} satisfies ResumeSchema
return resume
}
const getArticleJsonLD = async (data: Data, article: Article) => {
const jsonld = {
'@context': 'https://schema.org',
'@type': 'BlogPosting',
headline: article.data.title,
image: article.data.heroImage.src,
datePublished: article.data.pubDate.toISOString(),
keywords: article.data.tags,
inLanguage: 'en-US',
author: {
'@type': 'Person',
name: data.profile.basics.name,
url: data.profile.basics.url
}
}
return jsonld
}
const getJsonLDResume = async (data: Data) => {
const work = await data.work.find()
const currentWork = work.find((w) => !w.data.endDate)
const otherWork = work.filter((w) => w !== currentWork)
const jsonld = {
'@context': 'https://schema.org',
'@type': 'Person',
id: '#me',
name: data.profile.basics.name,
email: data.profile.basics.email,
image: data.profile.basics.image,
url: data.profile.basics.url,
jobTitle: currentWork?.data.position,
contactPoint: data.profile.basics.profiles.map((profile) => ({
'@type': 'ContactPoint',
contactType: profile.network.toLowerCase(),
identifier: profile.username,
url: profile.url
})),
address: {
'@type': 'PostalAddress',
addressLocality: data.profile.basics.location.city,
addressRegion: data.profile.basics.location.region,
addressCountry: data.profile.basics.location.countryCode
},
sameAs: data.profile.basics.profiles.map((profile) => profile.url),
hasOccupation: currentWork && {
'@type': 'EmployeeRole',
roleName: currentWork.data.position,
startDate: currentWork.data.startDate.toISOString()
},
worksFor: currentWork && {
'@type': 'Organization',
name: currentWork?.data.name,
sameAs: currentWork?.data.url
},
alumniOf: otherWork.map((w) => ({
'@type': 'Organization',
name: w.data.name,
sameAs: w.data.url,
employee: {
'@type': 'Person',
hasOccupation: {
'@type': 'EmployeeRole',
roleName: w.data.position,
startDate: w.data.startDate.toISOString(),
endDate: w.data.endDate?.toISOString()
},
sameAs: '#me'
}
}))
}
return jsonld
}
export { getJsonResume, getJsonLDResume, getArticleJsonLD }
@@ -1,12 +0,0 @@
import { getCollection } from 'astro:content'
class Work {
public find = () => getCollection('work')
public get = async (slug: string) => {
const collection = await this.find()
return collection.find((entry) => entry.data.slug === slug)
}
}
type WorkItem = Exclude<Awaited<ReturnType<Work['get']>>, undefined>
export { Work, type WorkItem }
src/env.d.ts
@@ -1,2 +0,0 @@
/// <reference path="../.astro/types.d.ts" />
/// <reference types="astro/client" />
@@ -1,175 +0,0 @@
---
import { Picture } from 'astro:assets'
import { render } from 'astro:content';
import { type Article, data } from '@/data/data.js'
import { getArticleJsonLD } from '@/data/data.utils'
import Html from '../html/html.astro'
type Props = {
article: Article
}
const { props } = Astro
const { article } = props
const { Content } = await render(article);
console.log('foo', Content)
---
<Html
title={article.data.title}
description={article.data.description}
jsonLd={getArticleJsonLD(data, article)}
>
<article>
<header>
<h1>
{article.data.title.split(' ').map((word) => <span>{word}</span>)}
</h1>
<a href='/'><h2>By {data.profile.basics.name}</h2></a>
</header>
<Picture
loading='eager'
class='img'
src={article.data.heroImage}
widths={[320, 640, 1024, 1400]}
formats={['avif', 'webp', 'png']}
alt='Cover image'
/>
<div class='content'>
<Content />
</div>
</article>
</Html>
<style lang='less'>
article {
--left-padding: 100px;
display: grid;
letter-spacing: 0.08rem;
font-size: 1rem;
line-height: 2.1rem;
grid-template-columns: 1fr calc(50ch + var(--left-padding)) 2fr;
grid-template-rows: auto;
grid-template-areas:
'. title cover'
'. content cover';
}
article :global(picture) {
grid-area: cover;
position: relative;
}
.img {
max-width: 100%;
height: 100vh;
top: 0;
position: sticky;
object-fit: cover;
object-position: center;
right: 0;
clip-path: polygon(40% 0%, 100% 0%, 100% 100%, 0% 100%, 0% 50%);
}
header {
grid-area: title;
height: 80vh;
display: flex;
justify-content: center;
flex-direction: column;
}
h2 {
font-size: 1.5rem;
font-weight: 300;
color: #fff;
text-transform: uppercase;
margin-top: var(--space-md);
color: #000;
}
h1 {
display: flex;
flex-wrap: wrap;
font-size: 4rem;
line-height: 1;
color: #fff;
text-transform: uppercase;
font-weight: 400;
gap: 1rem;
span {
display: inline-block;
background: red;
padding: 0.5rem 1rem;
}
}
.content {
grid-area: content;
padding: var(--space-xl);
padding-left: var(--left-padding);
:global(img) {
max-width: 100%;
height: auto;
margin-bottom: var(--space-lg);
}
:global(p) {
text-align: justify;
margin-bottom: var(--space-lg);
}
:global(p):first-of-type {
&:first-letter {
font-size: 5rem;
border: 5px solid #000;
float: left;
padding: 0 var(--space-md);
margin-right: 1rem;
line-height: 1;
}
}
}
@media (max-width: 1024px) {
article {
--left-padding: 0;
grid-template-columns: 1fr;
grid-template-areas:
'title'
'cover'
'content';
}
article :global(picture) {
position: absolute;
z-index: -1;
height: 80vh;
}
.img {
clip-path: none;
height: 80vh;
opacity: 0.5;
}
header {
padding: var(--space-xl);
height: 80vh;
}
h1 {
font-size: 2.5rem;
}
h2 {
font-size: 1.2rem;
}
.content {
padding: var(--space-lg);
}
}
</style>
@@ -1,68 +0,0 @@
---
import type { Article } from '@/data/data.js'
import { range } from '@/utils/data'
import Html from '../html/html.astro'
type Props = {
pageNumber: number
pageCount: number
articles: Article[]
}
const { articles, pageNumber, pageCount } = Astro.props
const hasPrev = pageNumber > 1
const hasNext = pageNumber < pageCount
---
<Html title='Articles' description='A list of articles'>
<h1>Articles</h1>
{
articles.map((article) => (
<div>
<h2>{article.data.title}</h2>
<p>{article.data.description}</p>
</div>
))
}
<nav>
<a aria-disabled={!hasPrev} href={`/articles/pages/${pageNumber - 1}`}
>Previous</a
>
{
range(1, pageCount).map((page) => (
<a
class:list={[page === pageNumber ? 'active' : undefined]}
href={`/articles/pages/${page}`}
>
{page}
</a>
))
}
<a aria-disabled={!hasNext} href={`/articles/pages/${pageNumber + 1}`}>
Next
</a>
</nav>
</Html>
<style lang='less'>
nav {
display: flex;
justify-content: center;
gap: 1rem;
}
a {
color: #0070f3;
text-decoration: none;
}
a.active {
font-weight: bold;
}
a[aria-disabled='true'] {
color: #ccc;
pointer-events: none;
}
</style>
@@ -1,50 +0,0 @@
---
import { data } from '@/data/data.js'
import Article from './articles.item.astro'
type Props = {
class?: string
}
const { class: className, ...rest } = Astro.props
const articleCount = 6
const allArticles = await data.articles.find()
const sortedArticles = allArticles.sort(
(a, b) =>
new Date(b.data.pubDate).getTime() - new Date(a.data.pubDate).getTime()
)
const hasMore = sortedArticles.length > articleCount
const articles = sortedArticles.slice(0, articleCount)
---
<div class:list={['articles', className]} {...rest}>
<h2>Articles</h2>
<div class='items'>
{articles.map((article) => <Article article={article} />)}
</div>
{hasMore && <a href='/articles/pages/1'>View all articles</a>}
</div>
<style lang='less'>
.articles {
display: grid;
gap: var(--space-lg);
h2 {
font-size: var(--font-xl);
}
.items {
display: flex;
flex-wrap: wrap;
gap: var(--space-md);
}
}
@media print {
.articles {
display: none;
}
}
</style>
@@ -1,71 +0,0 @@
---
import { Picture } from 'astro:assets'
import Time from '@/components/time/absolute.astro'
import type { Article } from '@/data/data.js'
import { formatDate } from '@/utils/time.js'
type Props = {
article: Article
}
const { article: item } = Astro.props
---
<a href={`/articles/${item.data.slug}`}>
<article>
<Picture
class='thumb'
alt='thumbnail image'
src={item.data.heroImage}
formats={['avif', 'webp', 'jpeg']}
width={100}
/>
<div class='content'>
<small>
<Time format={formatDate} datetime={item.data.pubDate} />
</small>
<h3>{item.data.title}</h3>
</div>
</article>
</a>
<style lang='less'>
a {
width: 45%;
}
@media (max-width: 768px) {
a {
width: 100%;
}
}
article {
display: flex;
gap: var(--space-md);
}
.thumb {
border-radius: 0.5rem;
width: 100px;
height: 100px;
grid-area: image;
}
.content {
flex: 1;
display: flex;
flex-direction: column;
justify-content: center;
}
h3 {
font-size: var(--font-lg);
margin: 0;
}
small {
color: var(--color-text-light);
font-size: var(--font-sm);
}
</style>
@@ -1,82 +0,0 @@
---
import { Picture } from 'astro:assets'
import { data } from '@/data/data.js'
import Profile from './description.profile.astro'
type Props = {
class?: string
}
const { class: className, ...rest } = Astro.props
const { Content, basics, image } = data.profile
---
<div class:list={['main', className]} {...rest}>
<Picture
class='picture'
alt='Profile Picture'
src={image}
formats={['avif', 'webp', 'jpeg']}
width={230}
/>
<h1>{basics.name}</h1>
<h2>{basics.tagline}</h2>
<div class='description'>
<Content />
</div>
<div class='profiles'>
{basics.profiles.map((profile) => <Profile profile={profile} />)}
</div>
</div>
<style lang='less'>
@media screen and (max-width: 768px) {
.main {
display: flex;
flex-direction: column;
align-items: center;
}
}
.description {
line-height: 1.3rem;
text-align: justify;
:global(p) {
margin-bottom: var(--space-md);
}
}
h1 {
font-size: var(--font-xxl);
font-weight: bold;
letter-spacing: 1px;
}
h2 {
font-size: var(--font-lg);
font-weight: normal;
letter-spacing: 1px;
color: var(--color-text-light);
margin-bottom: var(--space-md);
}
.picture {
border-radius: 0 0 50% 0;
width: 230px;
height: 230px;
clip-path: circle(43%);
shape-outside: border-box;
float: left;
padding: var(--space-md);
}
.profiles {
display: flex;
flex-wrap: wrap;
gap: var(--space-md);
margin-top: var(--space-md);
}
</style>
@@ -1,42 +0,0 @@
---
import { Icon } from 'astro-icon/components'
import type { Profile } from '@/data/data'
type Props = {
profile: Profile['basics']['profiles'][number]
}
const { profile } = Astro.props
---
<a href={profile.url} target='_blank'>
<Icon class='icon' name={profile.icon} />
<div class='network'>{profile.network}</div>
<div class='username'>{profile.username}</div>
</a>
<style lang='less'>
a {
display: grid;
align-items: center;
column-gap: var(--space-sm);
grid-template-columns: auto 1fr;
grid-template-rows: auto auto;
grid-template-areas: 'icon network' 'icon username';
}
.icon {
grid-area: icon;
width: 2rem;
height: 2rem;
}
.network {
grid-area: network;
}
.username {
grid-area: username;
font-weight: bold;
}
</style>
@@ -1,114 +0,0 @@
---
import { data } from '@/data/data'
import Html from '../html/html.astro'
import Articles from './articles/articles.astro'
import Description from './description/description.astro'
import Info from './info/info.astro'
import Skills from './skills/skills.astro'
import Work from './work/work.astro'
const jsonLd = await data.getJsonLDResume()
---
<Html
title={data.profile.basics.name}
description='Landing page'
jsonLd={jsonLd}
>
<div class='wrapper'>
<div class='frontpage'>
<Description class='description' />
<Info class='info' />
<Articles class='articles' />
<Skills class='skills' />
<Work class='work' />
</div>
</div>
</Html>
<style lang='less'>
.wrapper {
--gap: var(--space-xxl);
margin: 0 auto;
width: 100%;
max-width: var(--content-width);
padding: var(--gap);
}
.frontpage {
display: grid;
gap: var(--gap);
grid-template-columns: repeat(4, 1fr);
grid-template-rows: auto;
overflow: hidden;
grid-template-areas:
'info description description description'
'articles articles articles articles'
'skills work work work';
}
.frontpage > * {
position: relative;
&::after {
content: '';
display: block;
height: 0.6px;
background-color: var(--color-border);
position: absolute;
bottom: calc(var(--gap) * -0.5);
left: calc(var(--gap) * -0.5);
right: calc(var(--gap) * -0.5);
}
&::before {
content: '';
display: block;
width: 0.6px;
background-color: var(--color-border);
position: absolute;
bottom: 0px;
top: calc(var(--gap) * -0.5);
bottom: calc(var(--gap) * -0.5);
right: calc(var(--gap) * -0.5);
}
}
.info {
grid-area: info;
break-inside: avoid;
}
.description {
grid-area: description;
break-inside: avoid;
}
.articles {
grid-area: articles;
}
.skills {
grid-area: skills;
}
.work {
grid-area: work;
}
@media (max-width: 768px) {
.wrapper {
--gap: var(--space-lg);
}
.frontpage {
grid-template-columns: 1fr;
grid-template-areas:
'description'
'info'
'articles'
'skills'
'work';
}
}
</style>
Some files were not shown because too many files have changed in this diff.