There are two ways to add privacy to Solana. One of them isn't really on Solana.
The first way is a sidechain. A separate network does its own consensus, runs its own validators, holds bridged SOL in its own multisig. You step out of Solana for a while, do private things, step back in. The sidechain trusts itself. If its validators collude, your SOL is gone. The privacy is real but the trust model is a step backwards from where you started.
The second way is a Layer 2. A small validator cohort verifies Groth16 proofs in milliseconds and votes on each one off-chain — but the merkle root, the nullifier set, the validator stake, and the slashing record all live in an on-chain Anchor program. The chain re-verifies every withdrawal proof against its own stored state. Even a fully malicious 10-of-10 cohort cannot approve an invalid withdrawal — they can delay or refuse, never forge. Privacy off-chain. Settlement on-chain. Trust as a fixed point on Solana.
Paraloom is the second kind. This is where it stands today.
What's actually built
Roughly 33,000 lines of Rust. 407 tests passing. Signed multi-platform binaries shipped through Sigstore's keyless flow. The current release is v0.5.0-rc2. There's no token. No presale, no airdrop schedule, no roadmap to one. The thing being built is privacy infrastructure that any Solana app can adopt. The economics are validator stake and protocol fees, not a coin launch.
End-to-end on devnet, the basic loop works: a user deposits SOL into a shielded pool through the on-chain bridge program; transfers move privately inside the pool, consuming old commitments and minting new ones with no link visible on-chain; withdrawals come out to any Solana address that's unlinkable to the original deposit. The full cycle exercises Groth16 proof generation client-side, the on-chain Anchor program for settlement, the BFT cohort for verification consensus, and per-nullifier PDAs that make double-spend impossible at the chain level. It's a small loop, and most of Paraloom's complexity is making it survive adversarial conditions.
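The commitment-and-nullifier lifecycle behind that loop can be sketched as a toy state machine. Everything here is illustrative: the names are hypothetical, and std's `DefaultHasher` stands in for the zk-friendly hash a real circuit would use.

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashSet;
use std::hash::{Hash, Hasher};

// Toy stand-in for the pool's commitment hash. The real system uses
// a cryptographic, circuit-friendly hash, not this.
fn h(parts: &[u64]) -> u64 {
    let mut s = DefaultHasher::new();
    parts.hash(&mut s);
    s.finish()
}

struct Pool {
    commitments: HashSet<u64>,
    // On-chain, each spent nullifier is a per-nullifier PDA; a set
    // models the same uniqueness property.
    nullifiers: HashSet<u64>,
}

impl Pool {
    fn new() -> Self {
        Pool { commitments: HashSet::new(), nullifiers: HashSet::new() }
    }

    // Deposit: publish a commitment binding (value, secret).
    // Neither the value nor the secret is revealed by the commitment.
    fn deposit(&mut self, value: u64, secret: u64) -> u64 {
        let c = h(&[value, secret]);
        self.commitments.insert(c);
        c
    }

    // Withdraw: reveal the nullifier derived from the secret. A second
    // spend of the same note hits the nullifier set and is rejected.
    fn withdraw(&mut self, value: u64, secret: u64) -> bool {
        let c = h(&[value, secret]);
        let n = h(&[secret]);
        if !self.commitments.contains(&c) || self.nullifiers.contains(&n) {
            return false;
        }
        self.nullifiers.insert(n);
        true
    }
}
```

In the real system the withdrawal doesn't reveal `(value, secret)` at all — a Groth16 proof shows knowledge of a valid note without disclosing which commitment it spends; the sketch only shows why the nullifier set makes double-spends structurally impossible.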
The interesting parts are the places where most privacy systems quietly cut corners.
Range proofs live in the circuit, not in validator code
Earlier this year, the withdrawal circuit gained bit-decomposition range constraints on values. Each withdrawal proof now constrains the value to fit in 64 bits, in-circuit, by decomposing it into bits, asserting each bit is 0 or 1, and asserting the bits reconstruct the value. Sounds like boilerplate. It isn't.
Without this, a malicious prover could exploit field arithmetic to forge value out of nothing. Groth16 operates over a finite field — the BLS12-381 scalar field — which is enormous, but it's not the same as u64. A clever attacker can prove "I'm withdrawing X" where X reads as a small valid amount under one interpretation and an enormous one under another, by exploiting field overflow. Catch the bug only at the validator layer and you've created an off-circuit security boundary, which is exactly the kind of thing that fails in subtle ways under audit pressure. Prove the range in-circuit and the soundness rests on Groth16, not on validator math.
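Outside the circuit, the same check looks like this in plain integers — a sketch only, with `u128` standing in for a field element wide enough to overflow `u64`; the real constraints are R1CS relations over the scalar field.

```rust
// Plain-integer sketch of the in-circuit 64-bit range check.
fn range_check_64(value: u128) -> bool {
    // Decompose into the low 64 bits.
    let bits: Vec<u128> = (0..64).map(|i| (value >> i) & 1).collect();

    // Booleanity: in-circuit this is the constraint b * (b - 1) = 0
    // per bit; here it reduces to a membership check.
    let booleanity = bits.iter().all(|&b| b == 0 || b == 1);

    // Reconstruction: in-circuit, sum_i b_i * 2^i must equal the value.
    // A value >= 2^64 cannot be reconstructed from 64 bits, so it fails.
    let recomposed: u128 = bits.iter().enumerate().map(|(i, &b)| b << i).sum();

    booleanity && recomposed == value
}
```

The reconstruction constraint is the load-bearing one: a field element larger than 2^64 simply has no satisfying 64-bit decomposition, so the overflow interpretation is unprovable rather than merely unchecked.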
This is one of those decisions that doesn't ship a feature. It just removes a category of bug.
Withdrawal proofs expire
Every withdrawal proof carries an expiration_slot baked into its public inputs. The on-chain program rejects expired transactions. The window is configurable per submitter — typically around a minute on devnet — but it's bounded.
The motivation is something that gets glossed over in privacy papers: leaked-proof replay. If a withdrawal proof is well-formed and the underlying nullifier hasn't been spent, anyone who obtains the proof bytes can submit it. Nullifier-PDA uniqueness prevents the second submission, but the first one might be the leaked submission, not the legitimate user's. With expiration_slot baked into the public inputs, the leaked proof becomes useless after the window passes. Combined with on-chain nullifier uniqueness, this closes the long-tail vector that people only think about during a postmortem.
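The acceptance rule combining both checks is small enough to sketch. The names here are illustrative, not the actual on-chain account layout.

```rust
use std::collections::HashSet;

// Hypothetical shape of the public inputs a withdrawal proof binds.
struct WithdrawalPublicInputs {
    nullifier: [u8; 32],
    expiration_slot: u64,
}

// A withdrawal is accepted only if it is inside its slot window AND
// its nullifier has never been spent. Leaked proofs die with the
// window; spent nullifiers die forever.
fn accept_withdrawal(
    inputs: &WithdrawalPublicInputs,
    current_slot: u64,
    spent_nullifiers: &HashSet<[u8; 32]>,
) -> bool {
    current_slot <= inputs.expiration_slot
        && !spent_nullifiers.contains(&inputs.nullifier)
}
```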
Slashing has cryptographic evidence
The BFT cohort runs at a default 7-of-10 threshold, configurable per network. Two events are slashable: equivocation and persistent unavailability.
Equivocation is what happens when a validator signs two conflicting messages at the same consensus height. It's mathematical proof of fault — not "the cohort voted that you misbehaved." Two valid signatures from your key on two contradictory votes, presented together, are evidence that's verifiable by anyone with your public key. The cohort builds the evidence off-chain; the registry authority submits it as an on-chain transaction; stake transfers from your ValidatorAccount PDA to the bridge vault and your times_slashed counter increments.
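The shape of that evidence is small enough to sketch. Signature verification is elided here; in the real system both votes carry signatures anyone can check against the validator's public key, which is what makes the evidence self-certifying.

```rust
// Toy shape of an equivocation check: two votes from the same
// validator at the same consensus height with different payloads.
// Field names are hypothetical.
struct Vote {
    validator: u64,
    height: u64,
    payload: [u8; 32],
}

// Given two (signature-verified) votes, equivocation is a pure
// structural predicate: same signer, same height, conflicting content.
fn is_equivocation(a: &Vote, b: &Vote) -> bool {
    a.validator == b.validator && a.height == b.height && a.payload != b.payload
}
```

Note what this predicate does not need: no cohort vote, no reputation input, no timing data. That's why equivocation is the cleanest slashing case — the fault is decidable from two messages alone.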
Persistent unavailability is fuzzier — missed heartbeats and votes over a configurable window — but the evidence is still aggregated and signed by the cohort before submission. The chain doesn't trust the cohort's claim alone; it just trusts that the cohort agreed.
Underneath both is reputation gating. A validator below the configured reputation threshold simply can't have their withdrawal vote counted. New validators start neutral; reputation moves with observed behaviour. This is a soft layer on top of stake, biasing the cohort toward reliable validators without making stake the only signal — which is what you'd want long-term, where heavy stake from a sleeping operator shouldn't outweigh a smaller, attentive one.
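The gating rule itself reduces to a filter before the count. A sketch, with hypothetical field names and a simple yes-vote count rather than the real stake-weighted tally:

```rust
// Illustrative validator record; real state lives in on-chain PDAs.
struct Validator {
    stake: u64,
    reputation: f64,
    voted_yes: bool,
}

// A withdrawal vote passes only if enough validators ABOVE the
// reputation floor voted yes. Stake below the floor is simply not
// counted — reputation gates, stake weighs.
fn vote_passes(validators: &[Validator], reputation_floor: f64, threshold: usize) -> bool {
    let counted = validators
        .iter()
        .filter(|v| v.reputation >= reputation_floor && v.voted_yes)
        .count();
    counted >= threshold
}
```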
Coordinator failover under thirty seconds
There's a coordinator role that drives BFT round mechanics — collecting votes, computing the threshold, submitting the on-chain tx after consensus. It's not a single point of trust because the cohort and the chain re-verify everything. But it's a single coordination point: if the primary coordinator dies, the network pauses until someone else takes over.
The HA model is active/passive. One coordinator is primary at any time; passives watch the heartbeat and replicate state snapshots. The kill-the-primary scenario test boots three coordinators, kills the primary, and asserts a passive becomes primary with state continuity within 30 seconds. State continuity matters: the new primary picks up mid-round if there were votes in flight, no jobs are lost or double-assigned.
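The promotion rule itself is simple; the engineering weight is in the state replication around it. A minimal sketch of the passive side's decision, with illustrative types:

```rust
use std::time::Duration;

// A passive coordinator's view of the primary's liveness.
struct PassiveCoordinator {
    since_last_heartbeat: Duration,
    failover_timeout: Duration,
}

impl PassiveCoordinator {
    // Promote once the primary's heartbeat is older than the timeout.
    // The 30-second worst case is roughly this timeout plus the work
    // of loading the replicated snapshot and resuming any in-flight round.
    fn should_promote(&self) -> bool {
        self.since_last_heartbeat > self.failover_timeout
    }
}
```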
Thirty seconds is the worst case. In practice, failover hits in single-digit seconds when the network is healthy. But "worst case under thirty seconds" is the contract that matters when an SLA is on the line.
Releases are cryptographically attested
Multi-platform binaries — Linux x86_64 and aarch64, macOS Apple Silicon, Windows MSVC — go out signed via Sigstore's keyless flow. Each signature is bound to the specific GitHub Actions workflow and commit that built it. The OIDC token is short-lived (minutes) and the certificate identity must match a regex restricted to paraloom-labs's own workflows.
The alternative most projects ship is a long-lived signing key. That key is a single point of compromise: if a developer machine gets compromised, or if a CI secret leaks, an attacker can produce signed releases indistinguishable from real ones. Sigstore's keyless flow takes that key out of existence — there's nothing for a developer to lose. A CycloneDX SBOM ships with every release; the container image goes to GHCR with the same signature discipline.
This is the kind of thing that makes a difference exactly once — when you need it.
The full module map and the on-chain account structure are in the architecture docs.
What's gating mainnet
Three things, in order of how hard they are to make go away:
The MPC ceremony has to actually run. Groth16 needs a per-circuit trusted setup. Paraloom uses BGM17 phase-2 — a multi-contributor ceremony where one honest participant is enough for soundness, and the transcript chain is publicly verifiable. The tooling is shipped: paraloom-ceremony-contribute, verify, finalize. What's not shipped is the actual coordinated execution with multiple verified contributors. Devnet uses locally generated proving keys for development convenience; mainnet keys must come from a real ceremony. This is gated until it runs and the transcript is public.
External audit. Internal review only so far. The privacy circuits, the Anchor program, the BFT cohort path, the slashing evidence catalog — all of it needs a second set of eyes from a security firm before mainnet stake should hold real value. Audit is a hard prerequisite, not a nice-to-have. Shipping a privacy L2 to mainnet without one is an extraordinary claim that needs extraordinary evidence; we don't have that evidence.
Compute layer output-notes. Private WASM compute is alpha. Wasmtime sandboxing with memory, fuel, and timeout limits — works. Pedersen commitments and Schnorr-style ownership proofs binding outputs to the requester — work. BFT result agreement on output hashes — works. What's not wired is the path from compute output back into the shielded pool as new commitments. This means today's compute is useful for delivered-to-requester results, but insufficient for compositional flows where one private computation feeds into another.
These aren't "soon." Each one is gated until it's genuinely done.
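The publicly verifiable transcript chain the ceremony relies on can be sketched as a hash chain: each contribution commits to a hash of the transcript before it, so anyone can replay the chain end to end. `DefaultHasher` stands in for a real cryptographic hash here — this shows the structure, not the security — and the types are illustrative, not the ceremony tooling's actual format.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// One ceremony contribution: the hash it claims to extend, plus its payload.
struct Contribution {
    prev_hash: u64,
    data: Vec<u8>,
}

// Stand-in transcript hash; a real ceremony uses a cryptographic hash.
fn hash_contribution(prev_hash: u64, data: &[u8]) -> u64 {
    let mut s = DefaultHasher::new();
    prev_hash.hash(&mut s);
    data.hash(&mut s);
    s.finish()
}

// Walk the chain from a genesis hash: every contribution must extend
// exactly the transcript state produced by the one before it.
fn verify_chain(genesis: u64, chain: &[Contribution]) -> bool {
    let mut expected = genesis;
    for c in chain {
        if c.prev_hash != expected {
            return false;
        }
        expected = hash_contribution(c.prev_hash, &c.data);
    }
    true
}
```

The one-honest-participant property lives elsewhere — in the toxic-waste handling of each contribution, not in the chain — but the chain is what lets outside observers confirm that the published contributions are the ones that actually fed into the final keys.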
What we got wrong, and what changed
Two things worth being explicit about, because pre-mainnet honesty is the thing this project trades on:
The original consensus design predated reputation gating. The cohort was a flat 7-of-10 with stake-weighted voting and no reputation layer. It worked on a clean network and broke down when validators went stale or unavailable for long stretches — stake stayed but signal quality dropped. Adding reputation gating + slashing evidence took longer than expected and pushed several other items downstream. The rewrite was right; the timeline estimate was wrong.
The compute layer was originally on a faster track than the payment layer. The intent was to ship private compute alongside private payments, with output-notes wired into the shielded pool from day one. What actually happened is that the payment layer's correctness work (range proofs, replay protection, slashing) absorbed engineering capacity, and compute slipped to alpha — execution and result agreement work, but output-notes don't reach the pool. This is the right trade-off in retrospect; if you can't ship correct payments, compute on top of them is a moot point.
The honest version of "where paraloom is" includes both of these. Public schedule slippage on rewrites and on layer prioritization, but not on what's actually correct.
What comes after
In rough order: the ceremony, then external audit, then paraloom-sdk (a typed wrapper around paraloom-core so app developers don't consume the raw crate), then compute output-notes wired through to the merkle root, then a decentralized devnet validator set with multiple operators before any mainnet activation.
Longer-term there's a research track around batch and recursive Groth16 proofs (cheaper amortized verification at scale), and around compute correctness verification — zk-WASM-shaped, where a single verifier checks a proof of correct execution without rerunning the program. Those are post-mainnet questions. The next twelve months are the three gates above plus the SDK.
Want in?
Run a validator on devnet. Reputation accrues now, and mainnet stake will use the same on-chain registry, so a validator with healthy reputation today starts with an advantage when the network goes live. The validator guide walks you through registration, stake, monitoring, and gracefully unregistering before maintenance windows so you don't get slashed for unavailability. Hardware bar is intentionally low: 16 GB RAM, no GPU, a Raspberry Pi 5 works.
Or read the code. Start with programs/paraloom/src/lib.rs and the withdrawal circuit; those two files are where most of the protocol's load-bearing decisions live. The whole repo is MIT-licensed. If you find something wrong, tell us.
Future posts will go deeper on individual subsystems — how reputation gating actually scores validators round by round, what the slashing evidence catalog looks like in practice, the trade-offs in choosing Groth16 over Plonk for this specific shape of circuit, what the MPC ceremony coordination looks like in flight. This one was the map. The terrain comes next.
