3 Commits

| SHA1 | Message | Author | Date |
| --- | --- | --- | --- |
| 4235a5bcdf | improved the opening for clubhouse protocol | Morten Olsen | 2026-01-12 14:18:43 +01:00 |
| 67ab1b349d | add clubhouse protocol article | Morten Olsen | 2026-01-12 13:44:17 +01:00 |
| e0619692fc | add hyperconnect article | Morten Olsen | 2026-01-12 11:14:11 +01:00 |

4 changed files with 328 additions and 0 deletions

New binary file: cover image, 9.0 MiB (not shown).

New file (+151 lines):
---
title: 'The Clubhouse Protocol: A Thought Experiment in Distributed Governance'
pubDate: 2026-01-12
color: '#10b981'
description: 'A napkin sketch for a decentralized messaging protocol where community rules are enforced by cryptography, not moderators.'
heroImage: ./assets/cover.png
slug: clubhouse-protocol
---
I am a huge admirer of the open-source ethos. There is something magical about how thousands of strangers can self-organize to build world-changing software like Linux or Kubernetes. These communities thrive on rough consensus, shared goals, and the freedom to fork if visions diverge.
But there is a disconnect. While we have mastered distributed collaboration for our *code* (Git), the tools we use to *talk* to each other are still stuck in a rigid, hierarchical past.
Even in the healthiest, most democratic Discord server or Slack workspace, the software forces a power imbalance. Technically, one person owns the database, and one person holds the keys. The community remains together because of trust, yes—but the *architecture* treats it like a dictatorship.
## The Problem: Benevolent Dictatorships
Most online communities I am part of are benevolent. The admins are friends, the rules are fair, and everyone gets along. But this peace exists *despite* the software, not because of it.
Under the hood, our current platforms rely on a "superuser" model. One account has the `DELETE` privilege. One account pays the bill. One account owns the data.
This works fine until it doesn't. We have seen it happen with Reddit API changes, Discord server deletions, or just a simple falling out between founders. When the social contract breaks, the one with the technical keys wins. Always.
I call this experiment **The Clubhouse Protocol**. It is an attempt to fix this alignment—to create a "Constitution-as-Code" where the social rules are enforced by cryptography, making the community itself the true owner of the platform.
This post is part of a series of ideas from my backlog—projects I have wanted to build but simply haven't found the time for. I am sharing them now in the hope that someone else becomes inspired, or at the very least, as a mental note to myself if I ever find the time (and skills) to pursue them.
*Disclaimer: I am not a cryptographer. The architecture below is a napkin sketch designed to explore the social dynamics of such a system. The security mechanisms described (especially the encryption ratcheting) are illustrative and would need a serious audit by someone who actually knows what they are doing before writing a single line of production code.*
## The Core Concept
In the Clubhouse Protocol, a "Channel" isn't a row in a database table. It is a shared state defined by a JSON document containing the **Rules**.
These rules define everything:
* Who is allowed to post?
* Who is allowed to invite others?
* What is the voting threshold to change the rules?
Because there is no central server validating your actions, the enforcement happens at the **client level**. Every participant's client maintains a copy of the rules. If someone tries to post a message that violates the rules (e.g., posting without permission), the other clients simply reject the message as invalid. It effectively doesn't exist.
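To make this concrete, here is a minimal TypeScript sketch of what such a rules document and the client-side check might look like. All field names here are my own invention, not part of any spec:

```typescript
// A hypothetical shape for a channel's rules document.
// Field names are illustrative, not a finished protocol.
interface ChannelRules {
  version: number;                      // bumped on every accepted rule change
  post: string[];                       // member IDs allowed to post ('*' = everyone)
  invite: string[];                     // member IDs allowed to propose new members
  voteThreshold: number;                // fraction of votes needed to change the rules
  votingPower: Record<string, number>;  // member ID -> vote weight
}

const rules: ChannelRules = {
  version: 1,
  post: ['morten'],
  invite: ['morten'],
  voteThreshold: 1.0,
  votingPower: { morten: 1 },
};

// Client-level enforcement: every client runs this check and simply
// ignores messages that violate the rules.
function mayPost(rules: ChannelRules, memberId: string): boolean {
  return rules.post.includes('*') || rules.post.includes(memberId);
}
```

The point is that `mayPost` runs on *every* participant's machine, so no single node needs `DELETE` privileges to enforce the rules.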
## The Evolution of a Community
To understand why this is powerful, let's look at the lifecycle of a theoretical community.
### Stage 1: The Benevolent Dictator
I start a new channel. In the initial rule set, I assign myself as the "Supreme Owner." I am the only one allowed to post, and I am the only one allowed to change the rules.
I invite a few friends. They can read my posts (because they have the keys), but if they try to post, their clients know it's against the rules, so they don't even try.
### Stage 2: The Republic
I decide I want a conversation, not a blog. So, I construct a `start-vote` message.
* **Proposal:** Allow all members to post.
* **Voting Power:** I have 100% of the votes.
I vote "Yes." The motion passes. The rules update. Now, everyone's client accepts messages from any member.
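A weighted tally like this could be sketched as follows — the message shapes are hypothetical, but they show why "100% of the votes" makes Stage 2 trivially passable:

```typescript
// Illustrative vote messages -- shapes are mine, not a spec.
type VoteMsg =
  | { type: 'start-vote'; voteId: string; proposal: string }
  | { type: 'cast-vote'; voteId: string; voter: string; choice: 'yes' | 'no' };

// Tally a vote using each member's voting power from the rules.
function passes(
  votes: VoteMsg[],
  power: Record<string, number>,
  threshold: number,
): boolean {
  const total = Object.values(power).reduce((a, b) => a + b, 0);
  let yes = 0;
  for (const v of votes) {
    if (v.type === 'cast-vote' && v.choice === 'yes') yes += power[v.voter] ?? 0;
  }
  return yes / total >= threshold;
}
```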
### Stage 3: The Peaceful Coup
As the community grows, I want to step back. I propose a new rule change:
* **Proposal:** New rule changes require a 51% majority vote from the community.
* **Proposal:** Reduce my personal voting power from 100% to 1 (one person, one vote).
The community votes. It passes.
Suddenly, I am no longer the owner. I am just a member. If I try to ban someone or revert the rules, the community's clients will reject my command because I no longer have the cryptographic authority to do so. The community has effectively seized the means of production (of rules).
## The Architecture
How do we build this without a central server?
### 1. The Message Chain
We need a way to ensure order and prevent tampering.
* A channel starts with three random strings: an `ID_SEED`, a `SECRET_SEED`, and a "Genesis ID" (a fictional previous message ID).
* Each message ID is generated by HMAC'ing the *previous* message ID with the `ID_SEED`. This creates a predictable, verifiable chain of IDs.
* The encryption key for the message **envelope** (metadata) is derived by HMAC'ing the specific Message ID with the `SECRET_SEED`.
This means if you know the seeds, you can calculate the ID of the next message that *should* appear. You can essentially "subscribe" to the future.
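The chain derivation above fits in a few lines of Node.js. HMAC-SHA256 is my assumption here; the sketch doesn't pin a specific hash function:

```typescript
import { createHmac } from 'node:crypto';

// Derive the next message ID by HMAC'ing the previous ID with ID_SEED.
function nextMessageId(prevId: string, idSeed: string): string {
  return createHmac('sha256', idSeed).update(prevId).digest('hex');
}

// Derive the envelope key by HMAC'ing the message ID with SECRET_SEED.
function envelopeKey(messageId: string, secretSeed: string): Buffer {
  return createHmac('sha256', secretSeed).update(messageId).digest();
}

// Anyone holding the seeds can walk the chain forward and
// "subscribe" to IDs that don't exist yet:
const genesis = 'genesis-0000';
const id1 = nextMessageId(genesis, 'my-id-seed');
const id2 = nextMessageId(id1, 'my-id-seed');
```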
### 2. The Envelope & Message Types
The protocol uses two layers of encryption to separate *governance* from *content*.
**The Outer Layer (Channel State):**
This layer is encrypted with the key derived from the `SECRET_SEED`. It contains the message metadata, but crucially, it also contains checksums of the current "political reality":
* Hash of the current Rules
* Hash of the Member List
* Hash of active Votes
This forces consensus. If my client thinks "Alice" is banned, but your client thinks she is a member, our hashes won't match, and the chain will reject the message.
**The Inner Layer (The Payload):**
Inside the envelope, the message has a specific `type`:
* `start-vote` / `cast-vote`: These are visible to everyone in the channel. Governance must be transparent.
* `mutiny`: A public declaration of a fork (more on this later).
* `data`: This is the actual chat content. To be efficient, the message payload is encrypted once with a random symmetric key. That key is then encrypted individually for each recipient's public key and attached to the header. This allows the group to remove a member simply by stopping encryption for their key in future messages.
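Putting the two layers together, an envelope might be shaped like this — again, field names are mine, and the consensus check shows how mismatched "political realities" get rejected:

```typescript
import { createHash } from 'node:crypto';

// Illustrative envelope layout -- not a spec.
interface Envelope {
  id: string;
  rulesHash: string;    // hash of the rules this client believes are current
  membersHash: string;  // hash of the member list
  votesHash: string;    // hash of active votes
  type: 'start-vote' | 'cast-vote' | 'mutiny' | 'data';
  // For 'data': the symmetric content key, encrypted once per recipient.
  wrappedKeys?: Record<string, string>; // recipient key ID -> encrypted key
  payload: string;      // ciphertext
}

const sha = (s: string) => createHash('sha256').update(s).digest('hex');

// Consensus check: reject any message whose view of the rules
// differs from our local state.
function sameReality(env: Envelope, localRules: string): boolean {
  return env.rulesHash === sha(localRules);
}
```

Removing a member then requires no "delete" operation at all: future `data` envelopes simply omit that member's entry from `wrappedKeys`.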
### 3. Storage Agnosticism
Because the security and ordering are baked into the message chain itself, the **transport layer** becomes irrelevant.
You could post these encrypted blobs to a dumb PHP forum, an S3 bucket, IPFS, or even a blockchain. The server doesn't need to know *what* the message is or *who* sent it; it just needs to store a blob of text at a specific ID.
## The Killer Feature: The Mutiny
The most radical idea in this protocol is the **Mutiny**.
In a standard centralized platform, if 45% of the community disagrees with the direction the mods are taking, they have to leave and start a new empty server.
In the Clubhouse Protocol, they can **Fork**.
A `mutiny` message is a special transaction that proposes a new set of rules or a new member list. It cannot be blocked by existing rules.
When a mutiny is declared, it splits the reality of the channel.
* **Group A (The Loyalists)** ignores the mutiny message and continues on the original chain.
* **Group B (The Mutineers)** accepts the mutiny message. Their clients apply the new rules (e.g., removing the tyrannical admin) and continue on a new fork of the chain.
Crucially, **history is preserved**. Both groups share the entire history of the community up until the fork point. It's like `git branch` for social groups. You don't lose your culture; you just take it in a different direction.
## Implementation Challenges
As much as I love this concept, there are significant reasons why it doesn't exist yet.
**The Sybil Problem:** In a system where "one person = one vote," what stops me from generating 1,000 key pairs and voting for myself? The solution lies in the protocol's membership rules. You cannot simply "sign up." An existing member must propose a vote to add your public key to the authorized member list. Until the community votes to accept you, no one will encrypt messages for you, and your votes will be rejected as invalid.
**Scalability & The "Header Explosion":** The encryption method described above (encrypting the content key for every single recipient) hits a wall fast. If you have 1,000 members and use standard RSA encryption, the header alone would be around 250KB *per message*. This protocol is designed for "Dunbar Number" sized groups (under 150 people). To support massive communities, you would need to implement something like **Sender Keys** (used by Signal), where participants share rotating group keys to avoid listing every recipient in every message.
**The "Right to be Forgotten":** In an immutable, crypto-signed message chain, how do you delete a message? You can't. You can only post a new message saying "Please ignore message #123," but the data remains. This is a privacy nightmare and potentially illegal under GDPR.
**Key Management is Hard:** If a user loses their private key, they lose their identity and reputation forever. If they get hacked, there is no "Forgot Password" link to reset it.
**The Crypto Implementation:** As noted in the disclaimer, rolling your own crypto protocol is dangerous. A production version would need to implement proper forward secrecy (like the Signal Protocol) so that if a key is compromised later, all past messages aren't retroactively readable. My simple HMAC chain doesn't provide that.
## Why it matters
Even if the **Clubhouse Protocol** remains a napkin sketch, I think the question it poses is vital: **Who owns the rules of our digital spaces?**
Right now, the answer is "corporations." But as we move toward more local-first and peer-to-peer software, we have a chance to change that answer to "communities."
We need more experiments in **distributed social trust**. We need tools that allow groups to govern themselves, to fork when they disagree, and to evolve their rules as they grow.
If you are a cryptographer looking for a side project, feel free to steal this idea. I just want an invite when it launches.

New binary file: cover image, 8.6 MiB (not shown).

New file (+177 lines):
---
title: 'Hyperconnect: A Theory of Seamless Device Mesh'
pubDate: 2026-01-12
color: '#3b82f6'
description: 'A theoretical framework for building an Apple-like service mesh that spans WiFi, Bluetooth, and LTE seamlessly.'
heroImage: ./assets/cover.png
slug: hyperconnect
---
Apple's "Continuity" features feel like magic. You copy text on your phone and paste it on your Mac. Your watch unlocks your laptop. It just works. But it only works because Apple owns the entire vertical stack.
For the rest of us living outside the walled garden, device communication is stuck in the 90s. We are still manually pairing Bluetooth or debugging local IP addresses. Why is it harder to send 10 bytes of data to a device three feet away than it is to stream 4K video from a server on the other side of the planet?
I have "ecosystem envy," and I think it's time we fixed it. I want to build a service mesh that treats Bluetooth, WiFi, and LTE as mere implementation details, not hard constraints.
"But doesn't Tailscale solve this?" you might ask. Tailscale (and WireGuard) are brilliant technologies that solve the *connectivity* problem by creating a secure overlay network at Layer 3 (IP). However, they don't solve the *continuity* problem. They assume the physical link exists. They can't ask the radio firmware to scan for BLE beacons because the WiFi signal is getting weak.
Similarly, projects like **libp2p** (used by IPFS) do an excellent job of abstracting transport layers for developers, but they function more as a library for building P2P apps rather than a system-wide mesh that handles your text messages and file transfers transparently. I want something that sits deeper—between the OS and the network.
Furthermore, I have a strong distaste for the "walled garden" approach. I don't believe you should have to buy every device from a single manufacturer just to get them to talk to each other reliably. An open-source, vendor-neutral framework would unlock this kind of "hyperconnectivity" for the maker community, allowing us to mix and match hardware without sacrificing that magical user experience.
So, I've been toying with a concept I call **Hyperconnect**.
If a person is hyperconnected, it generally means they are available on multiple different channels simultaneously. I want to build a framework that allows my devices to do the same.
## The Big Idea
The core idea is to build a framework where all your personal devices create a **device mesh** (distinct from the backend "service mesh" concept often associated with Kubernetes) that can span different protocols. This mesh maintains a live service graph and figures out how to relay messages from one device to another, using different strategies to do so effectively.
This isn't just about failover; it's about context-aware routing.
### The Architecture
To make this work without turning into a security nightmare, we need a few foundational blocks:
#### 1. Passports (Identity)
We can't just let any device talk to the mesh. The user starts by creating an authority private key. This key is used to sign "Passports" for devices. A passport is a cryptographic way for a device to prove, "I belong to Morten, and I am allowed in the mesh."
Crucially, this passport also includes a signed public key for the device. This allows for end-to-end encryption between any two nodes. Even if traffic is relayed through a third device (like the phone), the intermediary cannot read the payload.
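A passport issuance and check can be sketched with Node's built-in Ed25519 support. The shapes and names are illustrative; a real design would add expiry, revocation, and device metadata:

```typescript
import { generateKeyPairSync, sign, verify } from 'node:crypto';

// The user's authority key signs the device's public key,
// producing a "passport" any node can verify offline.
const authority = generateKeyPairSync('ed25519');
const device = generateKeyPairSync('ed25519');

const devicePub = device.publicKey.export({ type: 'spki', format: 'der' });
const passport = {
  devicePublicKey: devicePub,
  signature: sign(null, devicePub, authority.privateKey),
};

// Any node holding only the authority *public* key can check membership:
const isMember = verify(
  null,
  passport.devicePublicKey,
  authority.publicKey,
  passport.signature,
);
```

Because the passport carries the device's own public key, any two nodes can bootstrap end-to-end encryption from it without trusting the relay in between.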
#### 2. Lighthouses (Discovery)
How do isolated devices find each other? We need a **Lighthouse**. This is likely a cloud server or a stable home server with a public IP. When a device connects for the first time, it gets introduced through the Lighthouse to find other nodes and build up its local service graph. While an always-available service helps established devices reconnect, the goal is to be as peer-to-peer (P2P) as possible.
#### 3. The Service Graph
Every device advertises the different ways to communicate with it. It might say: "I am available via mDNS on the local network, I have an LTE modem accessible via this IP, and I accept Bluetooth LE connections."
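An advertisement like that could be as simple as a small gossiped document. The shape below is purely hypothetical:

```typescript
// A hypothetical advertisement a device gossips into the service graph.
const advertisement = {
  device: 'watch',
  transports: [
    { kind: 'mdns', address: 'watch.local' },          // local network
    { kind: 'lte',  address: '100.64.12.7' },          // reachable via modem
    { kind: 'ble',  address: 'aa:bb:cc:dd:ee:ff' },    // Bluetooth LE
  ],
};

const kinds = advertisement.transports.map((t) => t.kind);
```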
#### 4. Topology & Gossip
Once introduced, the Lighthouse steps back. The goal is a resilient peer-to-peer network. However, a naive "spaghetti mesh" where everyone gossips with everyone is a battery killer.
Instead, the network forms a **tiered topology**:
* **Anchor Nodes:** Mains-powered devices (NAS, Desktop) maintain the full Service Graph and gossip updates frequently. They act as the stable backbone.
* **Leaf Nodes:** Battery-constrained devices (Watch, Sensor) connect primarily to Anchor Nodes. They typically do not route traffic for others unless acting as a specific bridge (like a Phone acting as an LTE relay).
When a device rejoins the network (e.g., coming home), it doesn't need to check in with the Lighthouse. It simply pings the first known peer it sees (e.g., the Watch sees the Phone). If that peer is authorized, they sync the graph directly. The Lighthouse is merely a fallback for "cold" starts or when no known local peers are visible.
## The Scenario: A Smartwatch in the Wild
To explain how this works in practice, let's look at a specific scenario. Imagine I have a custom smartwatch that connects to a service on my **desktop computer at home** to track my steps.
### Stage 1: At Home
Initially, the watch is connected at home. It publishes its network IP using mDNS. My desktop sees it on the local network. Since the framework prioritizes bandwidth and low latency, the two devices communicate directly over IP.
The watch also knows it has an LTE modem, and it advertises to the Lighthouse that it is reachable there. It also advertises to my Phone that it's available via Bluetooth. The Service Graph is fully populated.
### Stage 2: Leaving the House
Now, it's time to head out. I leave the house, and the local WiFi connection drops.
This is where the framework needs to be smart. It must have a built-in mechanism to handle **backpressure**. For the few seconds I am in the driveway between networks, packets aren't lost; they are captured in a ring buffer (up to a safe memory limit), waiting for the mesh to heal.
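The buffering step above is essentially a bounded ring buffer — a minimal sketch, assuming we drop the oldest packets once the memory limit is hit:

```typescript
// Minimal ring buffer for the handover gap: keep the last N packets
// until the mesh heals, dropping the oldest when full.
class RingBuffer<T> {
  private buf: T[] = [];
  constructor(private capacity: number) {}

  push(item: T): void {
    if (this.buf.length === this.capacity) this.buf.shift(); // drop oldest
    this.buf.push(item);
  }

  // Flush everything once a new path is up.
  drain(): T[] {
    const out = this.buf;
    this.buf = [];
    return out;
  }
}
```

Whether dropping the oldest packets is right depends on the traffic — for a step counter it is; for a file transfer you'd want to stall the sender instead.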
The **Connection Owner** (in this case, my desktop, chosen because it has the most compute power and no battery constraints) looks at the graph. It sees the WiFi path is dead. It checks for alternatives. It sees the Watch advertised P2P capabilities over LTE.
The desktop re-establishes the connection over LTE. The buffer flushes. No packets dropped, just slightly delayed.
### Stage 3: The Metro (The Relay)
I head down into the metro. The LTE coverage is spotty, and the smartwatch's tiny antenna can't hold a stable connection to the cell tower. The connection drops again. The buffer starts to fill.
The desktop looks at the Service Graph. Direct IP is gone. LTE is gone. But, it sees that the **Phone** is currently online via 5G (better antenna) and that the Phone has previously reported a Bluetooth relationship with the Watch.
The desktop contacts the Phone: *"Hey, I need a tunnel to the Watch."*
The Phone acts as a relay. It establishes a Bluetooth Low Energy link to the Watch. The data path is now **Desktop ↔ Internet ↔ Phone ↔ Bluetooth ↔ Watch**.
The step counter updates. The mesh survives.
## Beyond the Basics: Strategy and Characteristics
So far, I've mostly talked about the "big three": WiFi, Bluetooth, and LTE. But the real power of a personal mesh comes when we start integrating niche protocols that are usually siloed.
### Expanding the Protocol Stack
Imagine adding **Zigbee** or **Thread** (via Matter) to the mix. These low-power mesh protocols are perfect for stationary home devices. Suddenly, your lightbulbs could act as relay nodes for your smartwatch when you are in the garden, extending the mesh's reach without needing a full WiFi signal.
Or consider **LoRa** (Long Range). I could have a LoRa node on my roof and one in my car. Even if I park three blocks away and the car has no LTE signal, it could potentially ping my home node to report its battery status or location. The bandwidth is tiny, but the range is incredible.
### Connection Characteristics
However, just knowing that a link *exists* isn't enough. The mesh needs to know the *quality* and *cost* of that link. We need to attach metadata to every edge in our service graph.
I believe we need to track at least four dimensions:
1. **Bandwidth:** Can this pipe handle a 1080p stream, or will it choke on a JSON payload?
2. **Latency:** Is this a snappy local WiFi hop (5ms), or a satellite uplink (600ms)?
3. **Energy Cost:** This is critical for battery-powered devices. Waking up the WiFi radio on an ESP32 is expensive. Sending a packet via BLE or Zigbee is much cheaper.
4. **Monetary Cost:** Am I on unlimited home fiber, or am I roaming on a metered LTE connection in Switzerland?
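As a data structure, those four dimensions are just metadata on each graph edge. The numbers below are made up for illustration:

```typescript
// Hypothetical per-edge metadata for the service graph.
interface LinkMetrics {
  bandwidthKbps: number;
  latencyMs: number;
  energyCost: number; // relative; 0 = free (mains-powered)
  moneyCost: number;  // relative; 0 = unmetered
}

const wifi: LinkMetrics = { bandwidthKbps: 100_000, latencyMs: 5,  energyCost: 3, moneyCost: 0 };
const ble:  LinkMetrics = { bandwidthKbps: 200,     latencyMs: 50, energyCost: 1, moneyCost: 0 };
const lte:  LinkMetrics = { bandwidthKbps: 20_000,  latencyMs: 40, energyCost: 4, moneyCost: 5 };
```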
### Smart Routing Strategies
Once the mesh understands these characteristics, the routing logic becomes fascinating. It stops being about "shortest path" and starts being about "optimal strategy."
* **The "Netflix" Strategy:** If I am trying to stream a video file from my NAS to my tablet, the mesh should optimize for **Bandwidth**. It should aggressively prefer WiFi Direct or wired Ethernet, even if it takes a few seconds to negotiate the handshake.
* **The "Whisper" Strategy:** If a temperature sensor needs to report a reading every minute, the mesh should optimize for **Energy**. It should route through the nearest Zigbee node, avoiding the power-hungry WiFi radio entirely.
* **The "Emergency" Strategy:** If a smoke detector goes off, we don't care about energy or money. The mesh should blast the alert out over every available channel—WiFi, LTE, LoRa, Bluetooth—to ensure the message gets through to me.
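The three strategies above boil down to different scoring functions over the same graph. Here is a self-contained sketch — link numbers and strategy names are mine, not tuned values:

```typescript
// Sketch: pick link(s) by strategy-specific scoring.
type Metrics = { bandwidth: number; latency: number; energy: number; money: number };
type Strategy = 'netflix' | 'whisper' | 'emergency';

const links: Record<string, Metrics> = {
  wifi: { bandwidth: 100_000, latency: 5,  energy: 3, money: 0 },
  ble:  { bandwidth: 200,     latency: 50, energy: 1, money: 0 },
  lte:  { bandwidth: 20_000,  latency: 40, energy: 4, money: 5 },
};

function pick(strategy: Strategy): string[] {
  if (strategy === 'emergency') return Object.keys(links); // blast every channel
  const score = (m: Metrics) =>
    strategy === 'netflix'
      ? m.bandwidth                  // maximize throughput
      : -(m.energy * 10 + m.money);  // minimize energy, then money
  return [Object.entries(links).sort((a, b) => score(b[1]) - score(a[1]))[0][0]];
}
```

With these numbers, "netflix" picks WiFi, "whisper" picks BLE, and "emergency" returns every link at once.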
## The Developer Experience
As a developer, I don't want to manage sockets or handle Bluetooth pairing in my application code. I want a high-level intent-based API.
It might look something like this for a pub/sub pattern:
```typescript
import { mesh } from '@hyperconnect/sdk';
// Subscribe to temperature updates from any node
mesh.subscribe('sensors/temp', (msg) => {
console.log(`Received from ${msg.source}: ${msg.payload}`);
});
// Publish a command with constraints
await mesh.publish('controls/lights', { state: 'ON' }, {
strategy: 'energy_efficient', // Prefer Zigbee/BLE
scope: 'local_network' // Don't route over LTE
});
```
For use cases that require continuous data flow (like video streaming) or legacy application support, the mesh could offer a standard stream interface that handles the underlying transport switching transparently:
```typescript
// Stream-based API (Socket-compatible)
// 'my-nas-server' resolves to a Public Key from the Passport
const stream = await mesh.connect('my-nas-server', 8080, {
strategy: 'high_bandwidth'
});
// Looks just like a standard Node.js socket
stream.write(new Uint8Array([0x01, 0x02]));
stream.on('data', (chunk) => console.log(chunk));
```
There are other massive topics to cover here—like handling delegated guest access (a concept I call 'Visas') or how this becomes the perfect transport layer for Local-First (CRDT) apps—but those deserve their own articles. For now, let's look at the downsides.
## But first, the downsides
I am painting a rosy picture here, but I want to be honest about the challenges.
**Battery Life:** Maintaining multiple radio states and constantly updating a service graph is expensive. A protocol like this needs to be aggressive about sleeping. The "advertising" phase needs to be incredibly lightweight.
**Complexity:** Implementing backpressure handling across different transport layers is hard. TCP handles some of this, but when you are switching from a UDP stream on WiFi to a BLE characteristic, you are effectively rewriting the transport layer logic.
**Security:** While end-to-end encryption (enabled by the keys in the Passport) solves the privacy issue of relaying, implementing a secure cryptographic protocol is notoriously difficult. Ideally, we would need to implement forward secrecy to ensure that if a device key is compromised, past traffic remains secure. That is a heavy lift for a weekend project.
**Platform Restrictions:** Finally, there is the reality of the hardware we carry. Efficiently managing radio handovers requires low-level system access. On open hardware like a Raspberry Pi, this is accessible. However, on consumer devices like iPhones or Android phones, the OS creates a sandbox that restricts direct control over the radios. An app trying to manually toggle network interfaces or scan aggressively in the background will likely be killed by the OS to save battery or prevent background surveillance (like tracking your location via WiFi SSIDs).
## A Call to Build
This is a project I have long wanted to build, but never found the time to.
I am posting this idea hoping it might inspire someone else to take a crack at it. Or, perhaps, this will just serve as documentation for my future self if I ever clear my backlog enough to tackle it.
The dream of a truly hyperconnected personal mesh is vivid. We have the radios, we have the bandwidth, and we have the hardware. We just need the software glue to make it stick.