Moving Beyond TPS
Performance is the reason GSVM exists. But how do we measure it?
Most blockchains benchmark performance with Transactions Per Second (TPS). While simple, TPS hides deeper limitations in execution design. A system optimized for basic token transfers may break down under real-world workloads.
Real-life Peak Usage
In early 2025, the $TRUMP memecoin launch on Solana highlighted the risk of shared execution bottlenecks:
AMM activity dominated shared resources
Validators like Jito experienced degraded performance
Priority fees surged ~50x
Compute units per block dropped by 50%
Solana didn’t crash, but it buckled. Applications paid the price for contention in a shared execution environment.
To model such scenarios, we evaluate performance through the following lenses:
Compute Units (CUs)
Inspired by Solana’s model but extended to reflect concurrency, hardware acceleration, and execution efficiency. GSVM tracks work done across CPUs, GPUs, and offload engines like SmartNICs. Lock-heavy or non-deterministic workloads are penalized to discourage anti-patterns.
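As a rough sketch of how such an accounting scheme might work, the function below charges work on each engine at a different rate and then applies multiplicative penalties for lock-heavy or non-deterministic behavior. All names, weights, and penalty factors here are illustrative assumptions, not GSVM's actual schedule.

```python
from dataclasses import dataclass

@dataclass
class WorkloadProfile:
    cpu_cycles: int          # instructions retired on general-purpose cores
    gpu_cycles: int          # work dispatched to GPU kernels
    offload_cycles: int      # work handled by SmartNIC / offload engines
    lock_acquisitions: int   # times the workload took a contended lock
    nondeterministic: bool   # e.g. depends on wall-clock time or randomness

def compute_units(w: WorkloadProfile) -> int:
    """Charge CUs for work on each engine, then penalize anti-patterns.
    Weights are hypothetical, chosen only to show the shape of the model."""
    base = (w.cpu_cycles * 1.0         # CPU work charged at full rate
            + w.gpu_cycles * 0.25      # accelerated work is cheaper per cycle
            + w.offload_cycles * 0.1)  # offloaded work is cheapest
    penalty = 1.0 + 0.05 * w.lock_acquisitions  # lock-heavy code pays more
    if w.nondeterministic:
        penalty *= 2.0                 # non-determinism is heavily discouraged
    return int(base * penalty)

parallel = WorkloadProfile(100_000, 400_000, 0, 0, False)
locky    = WorkloadProfile(100_000, 400_000, 0, 20, False)
print(compute_units(parallel))  # 200000
print(compute_units(locky))     # 400000 — same work, twice the CU bill
```

The point of the penalty term is incentive design: two workloads that do identical arithmetic can cost very different amounts if one serializes on locks.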
End-to-End Latency
Measured from submission to finality, not just at the execution layer. This matters for applications like games, perps, and real-time systems where jitter or delay breaks UX.
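A minimal sketch of what "submission to finality" measurement looks like in practice: collect (submit, finalize) timestamp pairs and report tail latency and jitter, not just the mean. The timestamps below are synthetic; in a real deployment the submit time is captured client-side and the finality time at the consensus layer.

```python
import statistics

def latency_stats(samples):
    """samples: list of (t_submit_ms, t_final_ms) pairs."""
    latencies = sorted(t_final - t_submit for t_submit, t_final in samples)
    p50 = latencies[len(latencies) // 2]
    p99 = latencies[min(len(latencies) - 1, int(len(latencies) * 0.99))]
    jitter = statistics.pstdev(latencies)  # spread breaks UX as much as delay
    return {"p50_ms": p50, "p99_ms": p99, "jitter_ms": round(jitter, 1)}

# 100 transactions: most finalize in ~400 ms, a few stragglers take 2 s.
samples = [(0, 400)] * 95 + [(0, 2000)] * 5
print(latency_stats(samples))  # p50 looks fine; p99 and jitter reveal the tail
```

A median-only view of this sample would look healthy; it is the p99 and the jitter that a game or perps exchange actually feels.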
Throughput Under Pressure
Instead of counting empty transfers, we evaluate execution under complex, interdependent workloads: the kind found in AMMs, AI agents, and large multiplayer games.
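The toy model below shows why interdependence matters: transactions that write to the same account (a single hot AMM pool, say) must serialize, so the longest conflict chain, not the worker count, sets the throughput floor. The account names, timings, and scheduler model are illustrative assumptions.

```python
from collections import Counter

def effective_tps(txs, workers=8, tx_time_ms=1.0):
    """txs: list of account keys each transaction writes to.
    Independent txs run in parallel across workers; txs touching the
    same account form a chain that must execute one at a time."""
    hot = Counter(txs).most_common(1)[0][1]    # longest conflict chain
    parallel_time = len(txs) / workers * tx_time_ms
    serial_time = hot * tx_time_ms             # conflicting txs serialize
    wall_ms = max(parallel_time, serial_time)  # contention sets the floor
    return int(len(txs) / wall_ms * 1000)

uniform = ["acct_%d" % i for i in range(10_000)]    # no contention
hotspot = ["hot_pool"] * 5_000 + uniform[:5_000]    # launch-day pile-on
print(effective_tps(uniform))  # 8000 tps: limited only by worker count
print(effective_tps(hotspot))  # 2000 tps: one hot account dominates
```

Both runs execute the same number of transactions, which is exactly why a headline TPS figure measured on the uniform workload says little about launch-day behavior.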