Geeking Out
The most fun thing about building Hashiverse is that it couldn't be built like any traditional software. Its absolute independence from any sovereign government or business meant that we could not depend on centralised cloud providers, servers, or databases. Hell - not even DNS and standard web certificates!
Anyone can modify the source code of the servers or the clients and participate in the Hashiverse protocol, so even the protocol itself has to be immune to bad actors.
Here are some of the implementation details that we particularly enjoy about Hashiverse. Each one points to the source so you can follow the idea straight into the code.
Likes and dislikes count themselves — no central servers required
Counting in a distributed network is a famously messy problem. We turn to statistics to do distributed counting for our feedback mechanisms. Each feedback signal carries its own PoW, and every additional PoW bit doubles the estimated number of feedback clicks. Clients merge feedback from multiple servers by taking the strongest signal per post, then heal weaker peers. The network converges without any coordinator. We don't care about exact counts — at large enough numbers we're in the right statistical ballpark, and it's damned tough to game.
Source: encoded_post_feedback.rs,
post_bundle_feedback_healing.rs
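The core arithmetic is tiny. A minimal sketch with hypothetical names (`estimated_clicks`, `merge_strongest` are ours, not the real encoded_post_feedback.rs API):

```rust
/// Each additional PoW bit doubles the expected work behind a signal, so the
/// strongest observed signal with `bits` bits implies roughly 2^bits clicks.
fn estimated_clicks(bits: u32) -> u64 {
    1u64 << bits
}

/// Merging feedback from several servers: keep the strongest signal per post.
/// Weaker peers can then be healed up to this value.
fn merge_strongest(signals: &[u32]) -> Option<u32> {
    signals.iter().copied().max()
}
```

Because the merge is just a max, it is order-independent and idempotent, which is exactly why the network converges without a coordinator.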
A viral post recruits the entire network as its CDN
When a server sees enough fetch requests for the same content, it starts caching it. The fetching client uploads the data to populate that cache. For a viral post, every server along every client's lookup path becomes a cache node. The more popular the content, the more servers cache it — the network self-scales.
Source: post_bundle_caching.rs
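The trigger can be sketched as a simple per-content hit counter; the type name and threshold here are illustrative, not the real post_bundle_caching.rs:

```rust
use std::collections::HashMap;

/// Popularity-triggered caching: once a content ID has been requested often
/// enough, the server starts caching it (and accepts the fetching client's
/// upload to populate that cache).
struct CacheGate {
    hits: HashMap<u64, u32>,
    threshold: u32,
}

impl CacheGate {
    fn new(threshold: u32) -> Self {
        Self { hits: HashMap::new(), threshold }
    }

    /// Record a fetch request; returns true once this content is hot enough
    /// that the server should become a cache node for it.
    fn record_fetch(&mut self, content_id: u64) -> bool {
        let n = self.hits.entry(content_id).or_insert(0);
        *n += 1;
        *n >= self.threshold
    }
}
```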
Trending hashtags emerge from thin air
No central counters. Since "trending" means many users are posting, any random server's local view already approximates the global truth. Each server tracks unique authors per hashtag using a tiny probabilistic counter. Counts are homed to the poster's server — so you can only inflate your own server's count, and even then, the same author posting 1,000 times counts as one. Artificially making a hashtag trend is therefore almost impossible.
Source: dispatch.rs,
hyper_log_log.rs
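The flavour of the counter can be shown with a toy single-register LogLog sketch. This is a deliberately simplified stand-in: the real hyper_log_log.rs presumably uses many registers and a proper bias correction, and `DefaultHasher` stands in for the protocol's hashing:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Toy probabilistic cardinality counter: remember the longest run of leading
/// zeros over hashed author IDs. The same author always hashes the same, so
/// posting 1,000 times still counts as one author.
struct UniqueAuthorSketch {
    max_zeros: u32,
}

impl UniqueAuthorSketch {
    fn new() -> Self {
        Self { max_zeros: 0 }
    }

    fn observe(&mut self, author: &str) {
        let mut h = DefaultHasher::new();
        author.hash(&mut h);
        // Cap at 63 so the estimate below never overflows a u64 shift.
        self.max_zeros = self.max_zeros.max(h.finish().leading_zeros().min(63));
    }

    /// Rough estimate of the number of distinct authors seen: 2^max_zeros.
    fn estimate(&self) -> u64 {
        1u64 << self.max_zeros
    }
}
```

Note the state is a single integer, which is why a server can afford one of these per hashtag.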
Forgotten content dies of neglect, not old age
No TTL. No expiry timer. When a server's storage fills up, it evicts the least-recently-accessed content. Popular content gets touched constantly and survives indefinitely. Content nobody reads quietly disappears through natural selection by access pattern.
Source: environment.rs
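A minimal sketch of access-driven eviction, assuming a logical clock per access (names and structure are ours, not the real environment.rs):

```rust
use std::collections::HashMap;

/// When the store is full, the entry touched longest ago is evicted.
/// Content that keeps getting read keeps getting its timestamp refreshed.
struct LruStore {
    capacity: usize,
    clock: u64,
    entries: HashMap<String, (String, u64)>, // key -> (value, last_access)
}

impl LruStore {
    fn new(capacity: usize) -> Self {
        Self { capacity, clock: 0, entries: HashMap::new() }
    }

    /// Reading content refreshes its last-access time, keeping it alive.
    fn get(&mut self, key: &str) -> Option<&str> {
        self.clock += 1;
        let clock = self.clock;
        self.entries.get_mut(key).map(|e| {
            e.1 = clock;
            e.0.as_str()
        })
    }

    fn put(&mut self, key: String, value: String) {
        self.clock += 1;
        if self.entries.len() >= self.capacity && !self.entries.contains_key(&key) {
            // Evict the least-recently-accessed entry: natural selection.
            if let Some(oldest) = self
                .entries
                .iter()
                .min_by_key(|(_, (_, t))| *t)
                .map(|(k, _)| k.clone())
            {
                self.entries.remove(&oldest);
            }
        }
        self.entries.insert(key, (value, self.clock));
    }

    fn contains(&self, key: &str) -> bool {
        self.entries.contains_key(key)
    }
}
```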
Healing only happens when someone cares
When a client fetches content from multiple servers and finds one is missing posts that another has, it heals the gap — but only because it was reading. Content nobody reads never gets healed. Content under active consumption gets healed every time a client fetches it. Readership is the replication factor.
Source: post_bundle_healing.rs
Bootstrapping trusts nobody — not even DNS
Three bootstrap domains across three regions, each resolved through three independent DNS-over-HTTPS providers with DNSSEC validation. Results are deduplicated and shuffled. Even then, every peer received must pass full cryptographic verification before the client trusts it. A compromised DNS provider can't inject fake peers because they can't forge the billions of hashes each server identity requires.
Source: dnssec_bootstrap_provider.rs,
peer_tracker.rs
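The merge step can be sketched as: union everything the providers returned, deduplicate, then keep only peers that survive cryptographic verification. The function name and the `verify` predicate are placeholders for the real dnssec_bootstrap_provider.rs logic:

```rust
use std::collections::BTreeSet;

/// Union the peer lists from independent DNS-over-HTTPS providers,
/// deduplicate them, and drop anything that fails verification. A single
/// compromised provider can add entries, but not entries that verify.
fn merge_bootstrap_peers(
    provider_results: &[Vec<String>],
    verify: impl Fn(&str) -> bool,
) -> Vec<String> {
    let dedup: BTreeSet<&String> = provider_results.iter().flatten().collect();
    dedup
        .into_iter()
        .filter(|p| verify(p.as_str()))
        .cloned()
        .collect()
}
```

(The real code also shuffles the result so no provider controls ordering; a deterministic set is used here to keep the sketch testable.)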
TLS certificates for raw IP addresses — no DNS required
Browsers refuse HTTPS to a bare IP unless the certificate covers that IP. Traditional CAs only issued certs for domain names — which would make every server dependent on DNS. In late 2025, Let's Encrypt began issuing IP certificates. We were holding our breath for this. Each server auto-provisions and renews its own IP certificate in the background. Browser clients can walk the DHT directly by IP over trusted HTTPS.
Source: https_transport_cert_refresher.rs
Your spam filter runs on thermodynamics, not databases
Every RPC envelope carries a proof-of-work solution. Servers reject anything below threshold before even parsing the payload. No CAPTCHA, no rate-limit database, no account verification. The cost is invisible to a legitimate user but bankrupts a spammer. The same PoW also feeds each server's reputation, helping fend off bad actors.
Source: rpc_request.rs
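The gate itself is a one-line comparison. In this sketch `DefaultHasher` stands in for the protocol's real PoW chain, and the function names are ours:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Difficulty of a (envelope, nonce) pair: leading zero bits of its hash.
fn pow_bits(envelope: &[u8], nonce: u64) -> u32 {
    let mut h = DefaultHasher::new();
    envelope.hash(&mut h);
    nonce.hash(&mut h);
    h.finish().leading_zeros()
}

/// Server side: reject below-threshold envelopes before parsing the payload.
fn accept(envelope: &[u8], nonce: u64, threshold: u32) -> bool {
    pow_bits(envelope, nonce) >= threshold
}

/// Client side: grind nonces until the threshold is met. Expected work is
/// ~2^threshold hashes, so each extra bit doubles the spammer's bill.
fn solve(envelope: &[u8], threshold: u32) -> u64 {
    (0u64..).find(|&n| accept(envelope, n, threshold)).unwrap()
}
```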
Sensitive endpoints cost exponentially more to call (and spam)
A routine RPC costs ~65K hashes. Talking to an unknown server costs 4× more. Posting costs 16× baseline. Feedback costs 32×. Creating a server identity costs billions of hashes (hours of work). Each layer is scaled precisely to its abuse potential.
Source: config.rs
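The arithmetic behind those multipliers: ~65K hashes is about 16 bits of PoW, and an N× work multiplier is just log2(N) extra bits. A sketch with illustrative constants (the real config.rs values may differ):

```rust
/// Baseline difficulty: 2^16 = 65,536 expected hashes for a routine RPC.
const BASELINE_BITS: u32 = 16;

/// An N-times work multiplier adds log2(N) bits of difficulty, since each
/// extra leading-zero bit doubles the expected hashing work.
fn cost_bits(multiplier: u32) -> u32 {
    BASELINE_BITS + multiplier.ilog2()
}
```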
Cache radius expands like a shockwave
Clients track how far out from the origin servers cached data has propagated. Each cache hit pushes the radius further. Future lookups start beyond the radius, hitting fresh servers instead of hammering the origin. Popular content radiates outward, meaning other clients hit cached data early in their walk. Obscure content stays local and uncached. No configuration, no tuning — the protocol adapts by simple observation.
Source: cache_radius_tracker.rs
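The tracker reduces to a running maximum plus an offset for the next walk. A minimal sketch (the real cache_radius_tracker.rs is likely richer, e.g. with decay):

```rust
/// Each observed cache hit at some distance from the origin pushes the known
/// radius outward; the next lookup starts just beyond the cached region.
struct CacheRadiusTracker {
    radius: u32,
}

impl CacheRadiusTracker {
    fn new() -> Self {
        Self { radius: 0 }
    }

    fn record_cache_hit(&mut self, distance_from_origin: u32) {
        self.radius = self.radius.max(distance_from_origin);
    }

    /// Start future lookups past the radius, sparing the origin servers.
    fn next_start(&self) -> u32 {
        self.radius + 1
    }
}
```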
Multiple levels of post bucket granularity tame the power law
Year → month → week → day → 6 hours → 1 hour → 15 min → 5 min → 1 min. A user who posts once a month costs one coarse bucket fetch. A user who posts 100 times a day drills into minute-level buckets fetched in parallel. Same protocol, same code, wildly different load profiles handled gracefully.
Source: buckets.rs
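One plausible way to pick a level on that ladder: take the coarsest bucket whose expected post count stays under a cap. The policy and cap below are our illustration, not the real buckets.rs logic:

```rust
/// The granularity ladder from coarse to fine, in minutes.
const BUCKET_MINUTES: [u64; 9] = [
    525_600, // year (365 days)
    43_200,  // month (30 days)
    10_080,  // week
    1_440,   // day
    360,     // 6 hours
    60,      // 1 hour
    15,      // 15 min
    5,       // 5 min
    1,       // 1 min
];

/// Coarsest level whose expected posts-per-bucket fit under `cap`.
fn pick_level(posts_per_day: f64, cap: f64) -> usize {
    BUCKET_MINUTES
        .iter()
        .position(|&m| posts_per_day * (m as f64) / 1_440.0 <= cap)
        .unwrap_or(BUCKET_MINUTES.len() - 1)
}
```

A once-a-month poster lands in the yearly bucket; a 100-posts-a-day poster gets pushed down to daily buckets or finer.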
Full buckets split — they don't break
When a timeline bucket fills up, it seals. Clients see the flag and drill into finer-grained children. A trending hashtag cascades from hourly to 15-minute to 1-minute buckets as volume spikes, automatically spreading load across different servers.
Source: dispatch.rs,
buckets.rs
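The seal flag is the whole trick: writers that hit it fall through to a finer-grained child. A minimal sketch (type and field names are ours):

```rust
/// A timeline bucket seals itself when full. Writers seeing `false` from
/// `try_insert` must target a finer-grained child bucket, which is how a
/// volume spike cascades from hourly to 15-minute to 1-minute buckets.
struct TimelineBucket {
    capacity: usize,
    posts: Vec<u64>,
    sealed: bool,
}

impl TimelineBucket {
    fn new(capacity: usize) -> Self {
        Self { capacity, posts: Vec::new(), sealed: false }
    }

    fn try_insert(&mut self, post_id: u64) -> bool {
        if self.sealed {
            return false;
        }
        self.posts.push(post_id);
        if self.posts.len() >= self.capacity {
            self.sealed = true;
        }
        true
    }
}
```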
One codebase, every platform — for real
hashiverse-lib compiles to native Rust (server) and WebAssembly (browser).
The protocol logic, cryptography, and client API are the same code running in both
environments. The browser loads it in a Web Worker so the main thread never blocks.
When you test the server, you're testing the same code the browser runs.
Source: hashiverse-lib/,
hashiverse-client-wasm/
100-server integration tests finish in seconds
In-memory transport replaces HTTP — zero network latency, no ports, no cleanup. Time runs at 300× real speed, compressing 30 minutes of peer discovery into 6 wall-clock seconds. 100 servers, 3 clients, full bootstrap, post submission, healing, and verification — one process, under 30 seconds.
Source: client_meets_servers.rs
ASICs begone! The PoW chain is self-referential
Five rounds of chained hashing across 17 state-of-the-art hashing algorithms. Each round's algorithm and repetition count are derived from the previous round's output. The sequence is data-dependent — an ASIC designed to be fast at one algorithm gains nothing if the chain routes through five others. We look forward to the day Hashiverse is so successful that building custom ASICs against it becomes economical, only for the chain to thwart them.
Source: pow.rs
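The routing idea in miniature, with `DefaultHasher` plus a per-algorithm salt standing in for the 17 real algorithms (the constants match the text; everything else is a sketch):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

const ALGORITHMS: u64 = 17;
const ROUNDS: usize = 5;

/// Stand-in for "algorithm number `algo`, repeated `reps` times".
fn hash_with(algo: u64, reps: u64, input: u64) -> u64 {
    let mut state = input;
    for _ in 0..reps {
        let mut h = DefaultHasher::new();
        algo.hash(&mut h);
        state.hash(&mut h);
        state = h.finish();
    }
    state
}

/// Data-dependent chaining: the next round's algorithm and repetition count
/// are derived from the current state, so the route through the chain cannot
/// be predicted (or hard-wired into silicon) ahead of time.
fn chained_pow(seed: u64) -> u64 {
    let mut state = seed;
    for _ in 0..ROUNDS {
        let algo = state % ALGORITHMS;
        let reps = 1 + state % 4;
        state = hash_with(algo, reps, state);
    }
    state
}
```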
Quantum-proof identity, committed today
Every identity embeds commitments to two post-quantum key algorithms (Falcon and Dilithium) alongside the classical Ed25519 key. The upgrade path is baked into every identity in the network — before cryptographically relevant quantum computers even exist. Two independent lattice assumptions; both must break simultaneously. In a nutshell, your Hashiverse identity won't break if quantum computers go mainstream.
Source: keys_post_quantum.rs
No passwords, no email, no problem
No central servers means no password database and no email to spam you with. You manage your own keys — and we give you three ways. Guest mode: zero setup, read-only, just start browsing. Keyphrase: enter a passphrase, same phrase gives same identity on any device. Passkey: your fingerprint or Face ID becomes your key via your device's secure enclave — hardware-backed, synced across devices, zero passwords to remember.
Source: key_locker.rs,
login/
One ciphertext, 32 keys
Posts are encrypted at rest and in transit. No snooping or post-processing by devious servers. A single encrypted post can be decrypted by up to 32 different passphrases — one per context it appears in. The header is self-describing, so any future decrypt reconstructs the exact encryption config from the ciphertext alone.
Source: encryption.rs
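The multi-key header can be pictured as one content key wrapped once per context, plus a self-describing check value so a decryptor can tell which slot matched. The sketch below is an illustration only — XOR with a hashed passphrase stands in for real key wrapping, and this is emphatically NOT the cryptography in encryption.rs:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

fn h(x: u64) -> u64 {
    let mut d = DefaultHasher::new();
    x.hash(&mut d);
    d.finish()
}

fn derive(passphrase: &str) -> u64 {
    let mut d = DefaultHasher::new();
    passphrase.hash(&mut d);
    d.finish()
}

/// Self-describing header: up to 32 wrapped copies of one content key,
/// plus a check value identifying the correct unwrap.
struct Header {
    slots: Vec<u64>,
    check: u64,
}

fn seal(content_key: u64, passphrases: &[&str]) -> Header {
    assert!(passphrases.len() <= 32, "one slot per context, max 32");
    Header {
        slots: passphrases.iter().map(|p| content_key ^ derive(p)).collect(),
        check: h(content_key),
    }
}

/// Try the passphrase against every slot; the check value says which (if any)
/// yielded the real content key.
fn open(header: &Header, passphrase: &str) -> Option<u64> {
    let k = derive(passphrase);
    header.slots.iter().map(|&s| s ^ k).find(|&c| h(c) == header.check)
}
```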
Five layers of DDoS defense before you reach application code
Layer 1: PoW on every RPC envelope. Layer 2: per-IP scoring with time decay. Layer 3: per-IP connection cap. Layer 4: kernel-level IP blacklisting when score crosses threshold. Layer 5: transport timeouts for TLS handshake, header read, and body read (Slow Loris defense).
Source: ipset_ddos.rs,
config.rs
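Layer 2 is the interesting one to sketch: a per-IP score that decays exponentially with time and trips the blacklist past a threshold. Half-life, penalties, and names here are illustrative:

```rust
/// Per-IP abuse score with exponential time decay. A burst of bad requests
/// pushes the score over the threshold; good behaviour lets it halve away.
struct IpScore {
    score: f64,
    last_update_secs: f64,
    half_life_secs: f64,
}

impl IpScore {
    fn new(half_life_secs: f64) -> Self {
        Self { score: 0.0, last_update_secs: 0.0, half_life_secs }
    }

    /// Decay the score for elapsed time, then add the new penalty.
    fn record(&mut self, now_secs: f64, penalty: f64) {
        let dt = now_secs - self.last_update_secs;
        self.score *= 0.5f64.powf(dt / self.half_life_secs);
        self.score += penalty;
        self.last_update_secs = now_secs;
    }

    /// Layer 4 would push this IP into the kernel blacklist.
    fn should_blacklist(&self, threshold: f64) -> bool {
        self.score >= threshold
    }
}
```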
Runs on a $5 VPS — by design, not by accident
Compressed LSM-tree storage on disk. Bounded in-memory caches. Lock-striped concurrent writes. Probabilistic counters for trending (64 bytes per hashtag). And the DHT means each server only stores data near its own ID — not the whole network.
Source: environment/,
config.rs
Every seam in the system is a swappable trait
Transport (mem channels vs HTTP vs WASM fetch). Time (real clock vs 300× acceleration). Storage (IndexedDB vs in-memory). DDoS protection (kernel ipset vs in-memory scoring). Key management (Web Crypto enclave vs test stubs). Swap any of them without touching protocol logic. Tests use the cheap ones; production uses the real ones; same code path.
Source: transport/,
time_provider/,
environment/
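The time seam is the easiest to show, and it also explains the 300× trick in the integration tests. Trait and type names here are illustrative, not the real time_provider/ API:

```rust
use std::cell::Cell;
use std::time::{SystemTime, UNIX_EPOCH};

/// Protocol code only ever sees this trait.
trait TimeProvider {
    fn now_millis(&self) -> u64;
}

/// Production: the real wall clock.
struct RealTime;
impl TimeProvider for RealTime {
    fn now_millis(&self) -> u64 {
        SystemTime::now()
            .duration_since(UNIX_EPOCH)
            .unwrap()
            .as_millis() as u64
    }
}

/// Tests: an accelerated clock where 1 ms of wall time counts `factor` times.
struct FastTime {
    millis: Cell<u64>,
    factor: u64,
}

impl FastTime {
    fn new(factor: u64) -> Self {
        Self { millis: Cell::new(0), factor }
    }

    /// Advance by `wall_millis` of wall time, scaled by the factor.
    fn tick(&self, wall_millis: u64) {
        self.millis.set(self.millis.get() + wall_millis * self.factor);
    }
}

impl TimeProvider for FastTime {
    fn now_millis(&self) -> u64 {
        self.millis.get()
    }
}

/// Example protocol logic: identical code path under either clock.
fn peer_is_stale(time: &dyn TimeProvider, last_seen_millis: u64, ttl_millis: u64) -> bool {
    time.now_millis().saturating_sub(last_seen_millis) > ttl_millis
}
```

At 300×, six wall-clock seconds of ticking yields thirty minutes of protocol time, which is exactly how the 100-server test compresses peer discovery.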
Browser PoW across isolated Web Workers
The WASM client takes advantage of all your cores by distributing proof-of-work across Web Workers. Each concurrent PoW job is isolated so they can't interfere. Results bridge back from JavaScript Promises to Rust Futures. The browser's main thread never touches a hash and never slows down.
Source: wasm_parallel_pow_generator.rs
Healing wastes zero bandwidth
The client sends a header describing what it has. The server responds with only the IDs it's missing. The client sends exactly those bytes. If the server already has everything, it's a single round-trip of headers.
Source: dispatch.rs
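The exchange is a set difference in two round trips at most. A sketch with hypothetical function names (the real dispatch.rs wire format will differ):

```rust
use std::collections::BTreeSet;

/// Step 1: the client describes what it holds (IDs only, not bodies).
fn client_header(have: &BTreeSet<u64>) -> Vec<u64> {
    have.iter().copied().collect()
}

/// Step 2: the server answers with only the IDs it is missing. If this comes
/// back empty, the whole heal was a single round-trip of headers.
fn server_missing(server_has: &BTreeSet<u64>, client_header: &[u64]) -> Vec<u64> {
    client_header
        .iter()
        .copied()
        .filter(|id| !server_has.contains(id))
        .collect()
}

// Step 3 (not shown): the client uploads exactly those post bodies.
```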
Two compression algorithms, one version byte
LZ4 for RPC packets (speed). Brotli at maximum quality for stored posts (size). A single leading version byte allows algorithm evolution without breaking any existing data. Below a minimum size, compression is skipped entirely — the overhead would exceed the savings.
Source: compression.rs
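The framing is one byte of dispatch. In this sketch the compression itself is elided (the bytes are only tagged), and the constants are illustrative rather than the real compression.rs values:

```rust
/// One leading version byte selects the algorithm, so new algorithms can be
/// added later without breaking any stored data.
const VERSION_NONE: u8 = 0;
const VERSION_LZ4: u8 = 1;
const VERSION_BROTLI: u8 = 2;

/// Below this size, compression overhead would exceed the savings.
const MIN_COMPRESS_LEN: usize = 64;

fn frame(payload: &[u8], preferred: u8) -> Vec<u8> {
    let version = if payload.len() < MIN_COMPRESS_LEN {
        VERSION_NONE
    } else {
        preferred
    };
    let mut out = vec![version];
    // Real code would run LZ4 or Brotli here; the sketch just tags the bytes.
    out.extend_from_slice(payload);
    out
}

/// Any reader dispatches on the first byte alone.
fn version_of(framed: &[u8]) -> u8 {
    framed[0]
}
```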
The recursive bucket visitor never recurses
The timeline walker uses explicit stack frames instead of function recursion. A callback at each level decides whether to drill deeper, skip, or stop. Sparse timelines skip nested levels entirely. Dense timelines drill down only where content exists.
Source: recursive_bucket_visitor.rs
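The pattern in miniature: a `Vec` of frames instead of the call stack, and a callback that prunes whole subtrees. Types and names are ours, not the real recursive_bucket_visitor.rs:

```rust
/// The callback's verdict at each bucket.
enum Visit {
    Drill,
    Skip,
}

struct Bucket {
    posts: usize,
    children: Vec<Bucket>,
}

/// Iterative traversal with explicit stack frames: no function recursion, so
/// arbitrarily deep timelines can't blow the call stack. Returns the total
/// posts seen in visited buckets.
fn visit_buckets(root: &Bucket, mut callback: impl FnMut(&Bucket) -> Visit) -> usize {
    let mut total = 0;
    let mut stack = vec![root];
    while let Some(bucket) = stack.pop() {
        total += bucket.posts;
        if let Visit::Drill = callback(bucket) {
            stack.extend(bucket.children.iter());
        }
    }
    total
}
```

Sparse timelines skip entire subtrees via `Visit::Skip`; dense ones drill only where content exists.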