Solana — Proof of History, Sealevel, and High-Throughput Blockchains
How Solana achieves 50,000+ TPS on a single global state machine — without sharding — using a cryptographic clock, parallel execution, and a pipelined validator architecture.
UPDATED Apr 11 2026 19:32

1. Overview
When Anatoly Yakovenko published the Solana whitepaper in 2017, his central claim was provocative: you do not need to shard a blockchain to achieve web-scale throughput. Every other high-throughput design at the time either partitioned state (Ethereum's shard roadmap, NEAR's Nightshade) or reduced decentralisation to the point of being a trusted database cluster. Solana's thesis was that the bottleneck was not computation or bandwidth — it was time. Validators incur enormous message-passing overhead just trying to agree on when things happened. Give every node a shared, cryptographically verifiable clock, and consensus collapses to a tiny fraction of its usual cost.
| Metric | Value (2025) | Notes |
|---|---|---|
| Peak TPS (mainnet) | ~65,000 txs/s (incl. vote txs) | Burst peak; sustained non-vote throughput averages a few thousand txs/s |
| Average fee | <$0.001 per tx | Base fee 5,000 lamports (0.000005 SOL) |
| Slot time | ~400 ms | Block time; probabilistic finality ~1.3s |
| Validator count | ~2,000 active | Nakamoto coefficient ≈ 19–25 |
| Hardware requirement | 12-core CPU, 256 GB RAM, 10 Gbps NIC | Higher than most L1s |
Solana's throughput comes at a real cost. The hardware requirements for validators are substantially higher than Ethereum or Bitcoin, limiting who can participate. The Nakamoto coefficient — the minimum number of validators that could collude to halt or censor the network — sits around 19–25. By comparison, Bitcoin's mining Nakamoto coefficient is approximately 4. Solana's design makes a deliberate engineering trade-off: maximise performance for users while accepting a higher barrier to validator participation.
2. Proof of History
Proof of History (PoH) is the most misunderstood concept in Solana. It is not a consensus mechanism. It is a cryptographic clock — a way for the network to agree on the ordering and approximate timestamps of events without all-to-all message passing.
The VDF construction
PoH is a Verifiable Delay Function (VDF) implemented using SHA-256. The leader node runs a continuous sequential hash chain:

$$h_{n+1} = \mathrm{SHA256}(h_n)$$
Each evaluation takes a fixed amount of time because the next hash depends on the output of the previous one — it cannot be parallelised. After $N$ iterations, the chain proves that at least $N$ SHA-256 evaluations of sequential work have been performed. An observer who sees $h_N$ knows that the node has been running for at least as long as it takes to compute $N$ sequential hashes.
Events (transactions, block headers) are "stamped" into the chain by inserting them as additional input to the next hash:

$$h_{n+1} = \mathrm{SHA256}(h_n \,\|\, \text{event})$$
This proves the event occurred between tick $n$ and tick $n+1$, establishing a causal order without any clock synchronisation protocol.
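The tick-and-stamp construction can be sketched in a few lines of Rust. This is a toy model: the standard library's `DefaultHasher` stands in for SHA-256 (so the structure, not the cryptography, is what's illustrated), and the names `tick` and `stamp` are hypothetical.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// One PoH tick: the next value depends on the previous output,
// so the chain cannot be computed in parallel.
fn tick(prev: u64) -> u64 {
    let mut h = DefaultHasher::new(); // stand-in for SHA-256
    prev.hash(&mut h);
    h.finish()
}

// Stamping an event: fold its bytes into the next hash, fixing the
// event's position between two ticks.
fn stamp(prev: u64, event: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    prev.hash(&mut h);
    event.hash(&mut h);
    h.finish()
}

fn main() {
    let mut state = 0u64; // genesis value
    for _ in 0..5 {
        state = tick(state); // each tick proves one unit of sequential work
    }
    let pre_event = state;
    state = stamp(state, b"tx: alice -> bob"); // event folded into the chain
    // The stamped head differs from a plain tick, so the event's position
    // in the sequence is tamper-evident.
    assert_ne!(state, tick(pre_event));
    println!("head after stamp: {state:016x}");
}
```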
Why this matters for consensus
In a traditional BFT protocol like PBFT, every validator must receive and acknowledge messages from a supermajority of other validators before confirming a value. For $n$ validators, each round requires $O(n^2)$ messages. At 2,000 validators, this is 4,000,000 messages per round — catastrophically expensive.
With PoH, the leader node publishes a continuous hash chain that acts as a shared timeline. Other validators do not need to negotiate on ordering — the PoH sequence is the ordering. They only need to verify that the leader's hash chain is valid (fast, parallel) and then cast a single vote on the chain state. This collapses per-round messaging from $O(n^2)$ to $O(n)$.
| Concept | What it is | What it is NOT |
|---|---|---|
| PoH tick | One sequential SHA-256 evaluation | Not a block or a vote |
| PoH sequence | A cryptographic proof of elapsed time and event ordering | Not a consensus decision |
| PoH clock | Enables Tower BFT to skip message rounds | Not a replacement for Tower BFT |
3. Tower BFT
Tower BFT is Solana's consensus protocol, built on top of the PoH clock. It is derived from Practical Byzantine Fault Tolerance (PBFT) but is optimised for Solana's PoH-ordered environment.
Vote structure and lockout
Each validator periodically votes for the highest valid PoH slot it has seen. The critical innovation is the exponential lockout: when a validator votes on slot $s$, it must wait an exponentially growing number of slots before it is allowed to vote on a fork that does not include $s$:

$$\text{lockout} = 2^v \text{ slots}$$

where $v$ is the number of consecutive votes the validator has made for this fork. After 32 votes, the lockout is $2^{32} \approx 4$ billion slots — practically irreversible. This makes switching forks increasingly expensive, not in computation, but in time the validator must wait before it can vote on an alternative.
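The lockout schedule is easy to check numerically; a small sketch (function name hypothetical):

```rust
// Tower BFT lockout growth: after v consecutive votes on a fork,
// a validator must wait 2^v slots before voting on a competing fork.
fn lockout_slots(consecutive_votes: u32) -> u64 {
    1u64 << consecutive_votes // 2^v; valid for v <= 63
}

fn main() {
    for v in [1u32, 8, 16, 32] {
        println!("{v:2} votes -> lockout of {} slots", lockout_slots(v));
    }
    // At 400 ms per slot, 32 votes locks the validator out for
    // 2^32 * 0.4 s, roughly 54 years: effectively irreversible.
    let years = (lockout_slots(32) as f64) * 0.4 / (365.25 * 24.0 * 3600.0);
    assert!(years > 50.0);
}
```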
A validator that violates lockout — voting on a competing fork before its lockout expires — can be slashed: a portion of its staked SOL is burned. (Slashing is part of the Tower BFT design; automated enforcement on mainnet has so far been limited.) This makes Byzantine behaviour not merely unproductive but economically costly. The combination of lockout and slashing achieves safety under the standard BFT assumption: as long as less than 1/3 of stake is Byzantine, the chain cannot fork at a depth where any validators have non-expired lockouts.
Finality
A block achieves optimistic confirmation (probabilistic finality) when more than 2/3 of stake has voted for it — typically within 1.3 seconds (roughly 3 slots). Full Tower BFT finality (with lockouts) requires more votes and takes longer, but in practice the network treats optimistic confirmation as final for most purposes.
The combination of PoH ordering and Tower BFT lockout achieves a 400 ms slot time and 1.3 s probabilistic finality — far faster than Ethereum's 12 s slot time and 2-epoch (~15 min) economic finality.
4. Sealevel — Parallel Execution
The Ethereum Virtual Machine (EVM) executes transactions sequentially. One transaction completes before the next begins. This is safe and simple, but it leaves all CPU cores except one idle. Solana's Sealevel runtime enables true parallel transaction execution across multiple cores and, in principle, GPUs.
Account declaration
The key mechanism is simple: every Solana transaction must declare, at submission time, the complete set of accounts it will read and write:
- Read accounts — can be read concurrently by multiple transactions
- Write accounts — require exclusive access; only one transaction at a time
The Sealevel scheduler inspects these account sets and builds a dependency graph. Transactions with no overlapping write accounts are non-conflicting and can execute in parallel. In practice, a block full of diverse DeFi transactions — swaps on different pools, transfers between different accounts — has very few conflicts and achieves close to linear speedup with core count.
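This style of scheduling can be sketched with a simplified conflict rule: two transactions conflict iff one writes an account the other reads or writes. All names below are illustrative, not Sealevel's actual API.

```rust
use std::collections::HashSet;

// Illustrative transaction model: declared read and write account sets.
#[derive(Debug)]
struct Tx {
    id: u32,
    reads: HashSet<&'static str>,
    writes: HashSet<&'static str>,
}

fn acct_set(xs: &[&'static str]) -> HashSet<&'static str> {
    xs.iter().copied().collect()
}

// Conflict iff either tx writes an account the other touches.
fn conflicts(a: &Tx, b: &Tx) -> bool {
    a.writes.intersection(&b.writes).next().is_some()
        || a.writes.intersection(&b.reads).next().is_some()
        || b.writes.intersection(&a.reads).next().is_some()
}

// Greedy scheduler: place each tx in the first batch it does not
// conflict with; every batch can then execute fully in parallel.
fn schedule(txs: Vec<Tx>) -> Vec<Vec<Tx>> {
    let mut batches: Vec<Vec<Tx>> = Vec::new();
    for tx in txs {
        match batches.iter_mut().find(|b| b.iter().all(|t| !conflicts(t, &tx))) {
            Some(b) => b.push(tx),
            None => batches.push(vec![tx]),
        }
    }
    batches
}

fn main() {
    let txs = vec![
        Tx { id: 1, reads: acct_set(&["pool_a"]), writes: acct_set(&["alice"]) },
        Tx { id: 2, reads: acct_set(&["pool_b"]), writes: acct_set(&["bob"]) },
        Tx { id: 3, reads: acct_set(&[]), writes: acct_set(&["alice"]) }, // write-write conflict with tx 1
    ];
    let batches = schedule(txs);
    assert_eq!(batches.len(), 2); // txs 1 and 2 run in parallel; tx 3 waits
    for (i, b) in batches.iter().enumerate() {
        println!("batch {i}: {:?}", b.iter().map(|t| t.id).collect::<Vec<_>>());
    }
}
```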
Stateless programs
Solana programs (smart contracts) are stateless. All state lives in separate data accounts. A program account contains only executable code; it does not store any data itself. When a program needs to read or write state, it operates on the data accounts passed to it in the transaction. This separation enables Sealevel to reason about parallelism purely from the account list — no analysis of program code is needed.
| Concept | Solana | Ethereum |
|---|---|---|
| Execution model | Parallel (Sealevel) | Sequential (EVM) |
| State location | Separate data accounts | Inside contract storage slots |
| Conflict detection | Declared at submission time | Not applicable (sequential) |
| VM | eBPF (extended Berkeley Packet Filter) | EVM (stack machine) |
The eBPF VM
Solana programs compile to extended Berkeley Packet Filter (eBPF) bytecode. eBPF was originally a Linux kernel technology for high-performance network packet filtering; its register-based architecture is far more amenable to JIT compilation than EVM's stack machine. The Solana runtime JIT-compiles eBPF bytecode on first execution, achieving near-native speed for hot programs.
5. Programs (Smart Contracts)
Solana programs are primarily written in Rust, compiled to eBPF. C is also supported. The dominant framework is Anchor, which provides macros that handle the boilerplate of account validation, serialisation, and error handling.
Core concepts
Cross-Program Invocation (CPI): A program can invoke another program synchronously. The callee runs in a child context with a subset of the caller's accounts. Unlike Ethereum's arbitrary call depth, Solana limits CPI depth to 4 to prevent stack overflows in the fixed-size eBPF stack.
Program Derived Addresses (PDAs): A program can deterministically generate account addresses it "owns" using:

$$\text{PDA} = \mathrm{find\_program\_address}(\text{seeds}, \text{program\_id})$$
The seeds are arbitrary bytes; the function finds a valid public key that is not on the Ed25519 curve (ensuring no private key exists for it). This means only the program itself can sign for PDA accounts — they are effectively owned by the program. PDAs are used for escrow accounts, pool vaults, metadata accounts, and anywhere a program needs to control funds without a human private key.
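The derivation loop can be sketched with toy stand-ins. The real `find_program_address` hashes seeds, a bump byte, and the program id with SHA-256 and rejects candidates that lie on the Ed25519 curve; here a stdlib hasher and a parity test stand in for both, so only the search structure is faithful.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in for SHA-256(seeds || bump || program_id).
fn toy_hash(seeds: &[&[u8]], bump: u8, program_id: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    for s in seeds {
        s.hash(&mut h);
    }
    bump.hash(&mut h);
    program_id.hash(&mut h);
    h.finish()
}

// Stand-in for "is this point on the Ed25519 curve?": roughly half of
// random candidates are, so a parity test models the rejection rate.
fn on_curve(candidate: u64) -> bool {
    candidate % 2 == 0
}

// Search bumps from 255 downward, as the real runtime does, and return
// the first off-curve candidate together with its bump.
fn find_toy_pda(seeds: &[&[u8]], program_id: &[u8]) -> (u64, u8) {
    for bump in (0u8..=255).rev() {
        let candidate = toy_hash(seeds, bump, program_id);
        if !on_curve(candidate) {
            return (candidate, bump);
        }
    }
    panic!("no valid bump found (astronomically unlikely)");
}

fn main() {
    let seeds: [&[u8]; 2] = [b"counter", b"alice"];
    let (pda, bump) = find_toy_pda(&seeds, b"Counter111");
    // Deterministic: any program or user recomputes the same address.
    assert_eq!((pda, bump), find_toy_pda(&seeds, b"Counter111"));
    println!("toy PDA {pda:016x} at bump {bump}");
}
```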
SPL Token Program: Solana's fungible token standard lives in a shared system program (the SPL Token Program). All SPL tokens — including USDC, wrapped SOL, and every memecoin — use the same program. Holding a token requires an Associated Token Account (ATA): a PDA derived from your wallet address and the token mint address. This is deterministic, so any program or user can compute where your tokens are stored without asking you.
Anchor counter program — minimal example (Rust)
// Anchor counter program — Rust
// Demonstrates: program structure, account constraints, PDA ownership
use anchor_lang::prelude::*;
declare_id!("Counter111111111111111111111111111111111111");
#[program]
pub mod counter {
use super::*;
/// Initialise a new counter account, owned by this program.
pub fn initialize(ctx: Context<Initialize>) -> Result<()> {
let counter = &mut ctx.accounts.counter;
counter.count = 0;
counter.authority = ctx.accounts.authority.key();
msg!("Counter initialized. Count: {}", counter.count);
Ok(())
}
/// Increment the counter by 1.
pub fn increment(ctx: Context<Increment>) -> Result<()> {
let counter = &mut ctx.accounts.counter;
require!(
counter.authority == ctx.accounts.authority.key(),
CounterError::Unauthorized
);
counter.count = counter.count.checked_add(1)
.ok_or(CounterError::Overflow)?;
msg!("Incremented. Count: {}", counter.count);
Ok(())
}
}
// ── Account structs ────────────────────────────────────────────────────
#[derive(Accounts)]
pub struct Initialize<'info> {
// init: creates the account; payer pays rent
// space: 8 (discriminator) + 32 (pubkey) + 8 (u64) = 48 bytes
#[account(init, payer = authority, space = 48)]
pub counter: Account<'info, Counter>,
#[account(mut)]
pub authority: Signer<'info>,
pub system_program: Program<'info, System>,
}
#[derive(Accounts)]
pub struct Increment<'info> {
// mut: we will write to this account
#[account(mut)]
pub counter: Account<'info, Counter>,
pub authority: Signer<'info>,
}
// ── Data layout ────────────────────────────────────────────────────────
#[account]
pub struct Counter {
pub authority: Pubkey, // 32 bytes — who can increment
pub count: u64, // 8 bytes — current value
}
// ── Errors ─────────────────────────────────────────────────────────────
#[error_code]
pub enum CounterError {
#[msg("Only the authority can increment this counter.")]
Unauthorized,
#[msg("Counter overflow.")]
Overflow,
}
/* ── Key design points ────────────────────────────────────────────────
1. All state is in `Counter` — a separate data account.
The program itself has no mutable storage.
2. Anchor's #[account(init)] creates the account and sets the
program as its owner — only this program can write to it.
3. The authority field enforces access control in program logic,
not in the account ownership model.
4. checked_add() prevents integer overflow (Rust arithmetic is
checked in debug mode but wraps in release — always use
checked_* in smart contracts).
5. For a PDA-owned counter (program signs for it):
seeds = [b"counter", authority.key().as_ref()],
bump — then only this program can modify the account.
*/
6. The Firedancer Validator Client
Solana launched with a single validator client, written in Rust by Solana Labs. In 2022, Jump Crypto — one of the most sophisticated trading firms in crypto — announced they were building a complete, independent reimplementation of the validator in C, called Firedancer.
Why rewrite from scratch?
Having only one client implementation is a serious risk: a single bug in the Rust client could halt the entire network, as happened during Solana's 2021–2022 outages. A second independent implementation means the network keeps running as long as at least one client works correctly. Firedancer also targets dramatically higher throughput:
- Performance target: 1,000,000+ TPS on commodity server hardware
- Network layer: QUIC (UDP-based) replacing TCP for the transaction ingress pipeline, enabling per-connection backpressure and stake-weighted packet dropping at the network layer rather than the application layer
- Architecture: Tile-based pipeline with strict CPU and memory pinning; each "tile" is a single-threaded process handling one stage of the validator pipeline (networking, signature verification, banking, broadcast)
Status
Firedancer reached devnet in 2023 and was deployed to mainnet in phases during 2024. As of early 2025, Frankendancer (Firedancer's networking stack combined with the existing Solana Labs execution engine) runs on approximately 20% of mainnet validators. Full Firedancer — with its own execution engine — is in active deployment.
7. Network Architecture
Solana's high throughput requires innovations at every layer of the networking stack. Four subsystems work in concert.
Gulf Stream — mempool-less forwarding
Ethereum and Bitcoin maintain a mempool: a buffer of unconfirmed transactions waiting to be included in a block. Solana eliminates the mempool entirely. Because the leader schedule is computed deterministically from epoch stake weights (and is therefore known up to ~2 epochs, ~4 days, in advance), clients and validators can forward transactions directly to the upcoming leader — typically 4 slots ahead. By the time the leader's slot arrives, its transaction buffer is pre-populated, enabling it to begin execution immediately.
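The premise can be sketched as a deterministic, stake-weighted lookup. The mixer below is a stand-in for the seeded RNG the real schedule uses, and all names are illustrative.

```rust
// SplitMix64-style mixer: a deterministic stand-in for the seeded RNG
// that the real leader schedule derives from epoch stake weights.
fn mix(x: u64) -> u64 {
    let mut z = x.wrapping_add(0x9E37_79B9_7F4A_7C15);
    z = (z ^ (z >> 30)).wrapping_mul(0xBF58_476D_1CE4_E5B9);
    z = (z ^ (z >> 27)).wrapping_mul(0x94D0_49BB_1331_11EB);
    z ^ (z >> 31)
}

// Stake-weighted pick: validators with more stake lead more slots,
// and every node computes the identical answer for the same inputs.
fn leader_for_slot<'a>(slot: u64, epoch_seed: u64, stakes: &'a [(&'a str, u64)]) -> &'a str {
    let total: u64 = stakes.iter().map(|(_, s)| s).sum();
    let mut point = mix(slot ^ epoch_seed) % total;
    for &(name, stake) in stakes {
        if point < stake {
            return name;
        }
        point -= stake;
    }
    unreachable!("point is always < total")
}

fn main() {
    let stakes = [("val_a", 60u64), ("val_b", 30), ("val_c", 10)];
    // A client holding a fresh transaction forwards it to the leaders of
    // the next few slots instead of gossiping it into a mempool.
    for slot in 1_000u64..1_004 {
        println!("slot {slot} -> forward to {}", leader_for_slot(slot, 42, &stakes));
    }
    assert_eq!(leader_for_slot(1_000, 42, &stakes), leader_for_slot(1_000, 42, &stakes));
}
```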
Turbine — block propagation
Propagating a large block to 2,000 validators over a peer-to-peer gossip network is slow. Turbine solves this with two ideas borrowed from BitTorrent: erasure coding and a fanout tree.
- The leader shreds each block into ~1.28 KB data packets, adding Reed-Solomon erasure codes so the block can be reconstructed from any 2/3 of shreds
- Shreds propagate in a tree: the leader sends each shred to a small set of validators (the "root" layer), who each forward to another set, and so on
- Each validator only needs to receive and forward a fraction of total shreds, limiting bandwidth per node to approximately 1 Gbps regardless of network size
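The bandwidth claim follows from simple tree arithmetic; a quick sketch (the fanout value is illustrative, and the hop count approximates $\lceil \log_f n \rceil$):

```rust
// With fanout f, each forwarding hop multiplies coverage by f, so a
// shred reaches n validators in about ceil(log_f(n)) hops while each
// node uploads to at most f peers regardless of network size.
fn hops(validators: u64, fanout: u64) -> u32 {
    let mut covered = 1u64; // the leader itself
    let mut h = 0u32;
    while covered < validators {
        covered = covered.saturating_mul(fanout);
        h += 1;
    }
    h
}

fn main() {
    println!("2,000 validators @ fanout 200: {} hops", hops(2_000, 200));
    println!("2,000,000 validators @ fanout 200: {} hops", hops(2_000_000, 200));
    // Coverage grows geometrically, so latency stays nearly flat
    // even if the validator set grows a thousandfold.
    assert_eq!(hops(2_000, 200), 2);
}
```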
Cloudbreak — concurrent account I/O
With thousands of transactions executing in parallel, the bottleneck shifts to disk I/O. Cloudbreak is Solana's account state storage layer. It stores accounts as memory-mapped files on SSDs, allowing the OS to manage caching and enabling concurrent reads from multiple threads with minimal locking. New account writes are appended to a write-ahead log and batch-committed to the memory-mapped store.
The four pipeline stages
- Fetch — network thread receives transaction packets via QUIC, applies stake-weighted rate limiting (validators with more stake get higher ingress priority), deduplicates packets
- SigVerify — GPU-accelerated batch verification of Ed25519 signatures; a single high-end GPU can verify ~1,000,000 signatures per second
- Banking — the Sealevel scheduler builds the account dependency graph, schedules parallel threads, and executes transactions; the result is a new account state
- Broadcast — the new block is shredded and propagated via Turbine; the leader also appends the PoH ticks for this slot and begins voting
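A toy version of this pipeline, using threads and channels in place of pinned cores and lock-free queues, with the stage logic reduced to string tagging:

```rust
use std::sync::mpsc;
use std::thread;

// Each stage owns a thread and hands work to the next over a channel,
// so all stages process different transactions concurrently.
fn run_pipeline(packets: Vec<String>) -> Vec<String> {
    let (to_sigverify, sigverify_rx) = mpsc::channel::<String>();
    let (to_banking, banking_rx) = mpsc::channel::<String>();
    let (to_broadcast, broadcast_rx) = mpsc::channel::<String>();

    // SigVerify stage: pretend to batch-verify signatures, forward survivors.
    let sigverify = thread::spawn(move || {
        for pkt in sigverify_rx {
            to_banking.send(format!("{pkt}|verified")).unwrap();
        }
    });
    // Banking stage: "execute" transactions against account state.
    let banking = thread::spawn(move || {
        for tx in banking_rx {
            to_broadcast.send(format!("{tx}|executed")).unwrap();
        }
    });

    // Fetch stage (here: the caller's thread) injects packets, then closes
    // the intake so downstream stages drain and exit.
    for p in packets {
        to_sigverify.send(p).unwrap();
    }
    drop(to_sigverify);
    sigverify.join().unwrap();
    banking.join().unwrap();

    // Broadcast stage: collect whatever reached the end of the pipeline.
    broadcast_rx.iter().collect()
}

fn main() {
    let out = run_pipeline((0..3).map(|i| format!("tx{i}")).collect());
    assert_eq!(out[0], "tx0|verified|executed");
    println!("{out:?}");
}
```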
8. Outages and Criticism
Solana has experienced more network outages than any other major L1. Understanding them honestly is important for evaluating the platform.
September 2021 — 17-hour outage
A bot-driven initial DEX offering (IDO) on Raydium generated a flood of transactions — approximately 400,000 per second, 100× the normal rate. The transaction processing queue consumed all available memory on validators; nodes began running out of RAM and crashing. Because restarts required loading the full account state, recovery was slow. The network coordinated an emergency restart via validator operators on a Discord server — a coordination mechanism that itself raised decentralisation concerns.
Root cause: No rate limiting at the transaction ingress layer. Any connection could flood the network.
Fix: QUIC-based transaction ingress with stake-weighted QoS (Quality of Service) — validator connections get priority proportional to their stake, and raw user connections are rate-limited.
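The stake-weighted split can be sketched as a proportional ingress budget (the real QoS policy is more nuanced; this is purely illustrative):

```rust
// Divide a fixed packet budget among connections in proportion to the
// stake behind each connection. Pure stake weighting, no unstaked floor.
fn packet_budget(total_budget: u64, stake: u64, total_stake: u64) -> u64 {
    if total_stake == 0 {
        return 0;
    }
    total_budget * stake / total_stake
}

fn main() {
    let total_stake = 1_000u64;
    let budget = 10_000u64; // packets per interval the node will accept
    // A validator backing 25% of stake gets 25% of ingress capacity.
    assert_eq!(packet_budget(budget, 250, total_stake), 2_500);
    // An unstaked connection gets nothing under pure stake weighting,
    // which is why practical deployments reserve it a small share.
    assert_eq!(packet_budget(budget, 0, total_stake), 0);
    println!("budget at 25% stake: {}", packet_budget(budget, 250, total_stake));
}
```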
2022 partial outages
Multiple shorter outages in 2022 stemmed from a combination of bugs in vote transaction handling and durable-nonce transaction processing, plus bot-driven transaction floods similar to 2021. Each was resolved within a few hours via coordinated validator restarts.
Validator hardware requirements and centralisation
Running a Solana validator requires a dedicated server with:
- 12+ CPU cores (AMD EPYC or similar), 2.8+ GHz
- 256 GB RAM (512 GB recommended for larger ledger history)
- NVMe SSD with 500 MB/s sustained I/O
- 10 Gbps symmetric network connection
This hardware costs approximately $3,000–8,000 to purchase or $500–1,500/month to rent. The Nakamoto coefficient — roughly 19–25 entities controlling 33% of stake — is low in absolute terms, though cross-chain comparisons are slippery: Ethereum counts hundreds of thousands of validators, but its stake is concentrated in a handful of liquid-staking pools and exchanges. Solana's foundation holds 2% of all staked SOL; the top 10 validators control around 30% of stake-weighted votes.
Solana's designers made a deliberate bet: optimise for performance now, improve decentralisation over time as hardware costs fall (Moore's Law) and the validator ecosystem matures. Whether this bet will pay off remains genuinely contested. The honest answer is that Solana's centralisation is a real concern — not just FUD — and the network's resilience during an adversarial event has not yet been fully tested.
9. Interactive: Proof of History Chain
The animation below simulates a Proof of History chain. Every 500 ms a new "tick" is added: the previous hash is truncated to show just the first 8 hex characters, fed into SHA-256, and the new hash is displayed. New events are occasionally stamped into the chain. This demonstrates how the chain creates a tamper-evident, ordered sequence of timestamps.