
Why Rust for Trading Infrastructure

Joaquin Bejar García · 10 April 2026 · 8 min read

Trading infrastructure has a hard constraint that most general-purpose stacks struggle with: determinism under load. When the book is processing 2M messages per second and you hit a GC pause, you don't lose a request — you lose the next ten milliseconds of the market. Rust removes an entire class of these failures at the language level.

The three constraints

Production matching engines, risk systems, and market data pipelines all share three non-negotiable constraints:

  1. No unpredictable latency spikes. Stop-the-world garbage collection is disqualifying. So is dynamic allocation on the hot path.
  2. No memory safety bugs. A use-after-free in an order book is a market incident, and market incidents cost money by the minute.
  3. Fearless concurrency. Multi-core utilization is how you scale throughput, but data races in a matching engine are worse than crashes.

C++ can satisfy all three, but the burden falls on the engineer — and on the code review process. Rust moves these constraints into the compiler.
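One concrete way Rust helps with the first constraint: allocate once at startup and keep the hot path allocation-free. Below is a minimal sketch (not from the original engine; `RingBuffer` and its API are illustrative) of a fixed-capacity ring buffer whose `push`/`pop` never touch the allocator after construction.

```rust
// Sketch: a fixed-capacity ring buffer so the hot path never allocates.
// All names here are hypothetical, not from a production engine.
pub struct RingBuffer<T, const N: usize> {
    buf: [Option<T>; N],
    head: usize,
    tail: usize,
    len: usize,
}

impl<T, const N: usize> RingBuffer<T, N> {
    pub fn new() -> Self {
        // The only allocation-adjacent work happens here, at startup.
        Self { buf: std::array::from_fn(|_| None), head: 0, tail: 0, len: 0 }
    }

    /// Returns the item back on overflow so the caller chooses the
    /// backpressure policy instead of the buffer silently allocating.
    pub fn push(&mut self, item: T) -> Result<(), T> {
        if self.len == N {
            return Err(item);
        }
        self.buf[self.tail] = Some(item);
        self.tail = (self.tail + 1) % N;
        self.len += 1;
        Ok(())
    }

    pub fn pop(&mut self) -> Option<T> {
        if self.len == 0 {
            return None;
        }
        let item = self.buf[self.head].take();
        self.head = (self.head + 1) % N;
        self.len -= 1;
        item
    }
}
```

The point is not the data structure itself but the discipline: with `const N` baked into the type, capacity decisions are made at compile time, not discovered as latency spikes in production.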

What we actually get from Rust

When we rewrote an options pricing engine from Python-on-top-of-C++ to pure Rust, the headline numbers were good: 40x throughput, P99 latency from 2.4ms to 210μs. But the real win was operational:

  • Zero production memory bugs in 18 months of running the new engine
  • Refactors we were afraid to do in the old code became routine — the borrow checker tells you exactly what you broke
  • Onboarding new engineers took days instead of weeks because the type system encodes invariants that used to live in wiki pages

The compiler is the first code reviewer. By the time a PR reaches a human, most of the "did you forget to handle X" questions are already answered.
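A small illustration of "the compiler answers the X question": encode the order lifecycle as an enum and match without a catch-all arm. The types and function below are a hypothetical sketch, not the production state machine.

```rust
// Sketch: an order lifecycle as an enum. Variant names are illustrative.
#[derive(Debug, PartialEq)]
pub enum OrderState {
    New,
    PartiallyFilled { filled: u64, remaining: u64 },
    Filled,
    Cancelled,
}

pub fn is_active(state: &OrderState) -> bool {
    // Deliberately no `_ =>` arm: adding a new variant (say, `Suspended`)
    // turns every match like this into a compile error, forcing each call
    // site to be revisited before the code builds.
    match state {
        OrderState::New => true,
        OrderState::PartiallyFilled { .. } => true,
        OrderState::Filled => false,
        OrderState::Cancelled => false,
    }
}
```

This is the mechanism by which invariants move out of wiki pages: the exhaustiveness check is the documentation, and it cannot go stale.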

A concrete example

Here's the core of a simplified order book insert in Rust. Because the method takes `&mut self`, the compiler guarantees exclusive access for the whole call: the level lookup and the order insert can never race with another mutation — no lock, no atomic, no data race.

```rust
pub fn add_order(&mut self, order: Order) -> Vec<Trade> {
    let level = self
        .levels
        .entry(order.price)
        .or_insert_with(|| Level::new(order.price));

    if order.side == Side::Bid {
        match_against_asks(&mut self.asks, order, level)
    } else {
        match_against_bids(&mut self.bids, order, level)
    }
}
```

In C++, this shape would work but you'd own the invariants. In Rust, the compiler owns them.
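The same guarantee extends across threads. A minimal sketch (all names hypothetical): feed orders to an engine thread over a channel, so each `Order` is *moved*, never shared. There is no way to keep a reference to an order after sending it — the compiler rejects it — so the engine thread mutates state with no lock at all.

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical minimal order type, for illustration only.
#[derive(Debug)]
struct Order {
    qty: u64,
}

fn run_engine() -> u64 {
    let (tx, rx) = mpsc::channel::<Order>();

    // The engine thread takes ownership of the receiving end.
    let engine = thread::spawn(move || {
        let mut total_qty = 0u64;
        for order in rx {
            // Exclusive ownership of each order: no lock, no atomic.
            total_qty += order.qty;
        }
        total_qty
    });

    for _ in 0..4 {
        tx.send(Order { qty: 10 }).unwrap();
    }
    drop(tx); // close the channel so the engine thread's loop ends

    engine.join().unwrap()
}
```

In C++ the equivalent design works too, but nothing stops a colleague from stashing a pointer to a queued order; in Rust that program simply doesn't compile.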

Where Rust falls short

Rust is not a universal solvent. Three things slow us down consistently:

  • Async ecosystem fragmentation. Tokio is the default, but the async story is still evolving faster than the rest of the language.
  • Generics-heavy code compiles slowly. An options analytics library with heavy const generics can take 90+ seconds for a clean release build.
  • FFI boundaries are real work. Binding to a vendor C++ SDK is never a 30-minute task.

None of these are blockers. They're the cost of running on the frontier.
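To make the FFI point concrete, here is the smallest possible C binding — calling libc's `strlen` — as a sketch of what every vendor SDK boundary involves: an `unsafe` block, explicit C string ownership, and a safety argument you write by hand. (This assumes a platform where the standard C library is linked, which is the common case.)

```rust
use std::ffi::CString;
use std::os::raw::c_char;

// Declaring a foreign C function: the signature is a promise the
// compiler cannot check for us.
extern "C" {
    fn strlen(s: *const c_char) -> usize;
}

fn c_string_len(s: &str) -> usize {
    let c = CString::new(s).expect("input must not contain interior NULs");
    // SAFETY: `c` is a valid NUL-terminated string and stays alive
    // for the duration of the call.
    unsafe { strlen(c.as_ptr()) }
}
```

One function is a few minutes of work; a vendor SDK is hundreds of these, each with its own ownership and lifetime story. That is why FFI boundaries are budgeted as real projects, not chores.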

What we recommend

For new systems where latency, memory safety, and multi-core throughput matter — matching engines, order books, risk engines, market data — Rust is now the default choice, not a bold bet. For everything else — internal tooling, data analysis, glue code — pick the language that matches the half-life of the problem.

The right question isn't "should we use Rust?" It's "where are our determinism constraints, and which parts of the stack cross them?" Start there.