Why I Stopped Calling OSRM Over HTTP (and Wrote Rust Bindings Instead)
Some problems aren't about making the HTTP API faster. They're about the HTTP API being the wrong tool entirely.
I needed to generate a dataset of 500 million (origin, destination, travel_time) triplets over French address data to train a travel-time embedding model: encoding ~25 million addresses from the Base Adresse Nationale into 64-dimensional vectors whose distances approximate real-world road travel times.
At 500M rows, even an optimistic 1ms per HTTP request (localhost, keep-alive, no JSON parsing overhead) puts you at ~139 hours of wall time. That's not a latency problem. That's a fundamental mismatch between the tool and the task. The only path forward was to call OSRM in-process and eliminate the transport layer entirely.
So I wrote osrm-binding: idiomatic Rust bindings that call OSRM's C++ engine in-process, skipping the network entirely.
The Problem with the HTTP API
OSRM ships with a wonderful HTTP server. For most use cases — a web app displaying a route, a backend computing occasional ETAs — it's perfect. But HTTP has costs that compound quickly:
Serialization/deserialization of JSON on every request
TCP overhead even on localhost
Latency floor that's hard to get below ~1ms even with keep-alive
When you need to query 10,000 routes in a tight loop (say, to generate training data or compute a distance matrix for route optimization), you're burning time on the transport layer, not the routing itself.
The alternative is to link against OSRM's C++ library directly and call it in-process. OSRM exposes a C++ API for exactly this. The catch: you need to bridge that API to your language of choice. osrm-binding does that bridge for Rust.
What the Crate Provides
```toml
# Cargo.toml
[dependencies]
osrm-binding = "0.1.7"
```
The crate exposes four main capabilities, all through a single OsrmEngine handle:
Engine Initialization
```rust
use osrm_binding::{OsrmEngine, Algorithm};

let engine = OsrmEngine::new("/path/to/france-latest.osrm", Algorithm::MLD)
    .expect("Failed to initialize OSRM engine");
```
You choose between MLD (Multi-Level Dijkstra — better for dynamic scenarios like traffic) and CH (Contraction Hierarchies — faster for static graphs). Both are supported.
Route Calculation
```rust
use osrm_binding::{RouteRequestBuilder, Point};

let request = RouteRequestBuilder::default()
    .points(vec![
        Point { longitude: 2.3522, latitude: 48.8566 }, // Paris
        Point { longitude: 5.3698, latitude: 43.2965 }, // Marseille
    ])
    .build()
    .unwrap();

let result = engine.route(&request).unwrap();
println!("{:?}", result.routes.first().unwrap());
```
Distance/Duration Table (the workhorse for matrix problems)
This is the call I use most heavily. Given N sources and M destinations, it returns an N×M matrix of durations and distances in one shot:
```rust
use osrm_binding::{TableRequest, Point};

let request = TableRequest {
    sources: vec![
        Point { longitude: 2.3522, latitude: 48.8566 }, // Paris
    ],
    destinations: vec![
        Point { longitude: 5.3698, latitude: 43.2965 }, // Marseille
        Point { longitude: 4.8357, latitude: 45.7640 }, // Lyon
    ],
};

let response = engine.table(&request).unwrap();
println!("{:?}", response.durations);
```
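For training-data generation, the natural next step is to flatten each response matrix into rows. A minimal sketch, assuming `response.durations` is a `Vec<Vec<f64>>` of seconds indexed `[source][destination]` with unroutable pairs as NaN (the crate's exact return type may differ; check its docs):

```rust
/// Flatten an N x M durations matrix into (source_idx, dest_idx, seconds)
/// triplets, skipping unroutable pairs encoded as NaN.
fn to_triplets(durations: &[Vec<f64>]) -> Vec<(usize, usize, f64)> {
    durations
        .iter()
        .enumerate()
        .flat_map(|(i, row)| {
            row.iter()
                .enumerate()
                .filter(|(_, d)| !d.is_nan())
                .map(move |(j, &d)| (i, j, d))
        })
        .collect()
}

fn main() {
    // Mock 1x2 matrix standing in for `response.durations`.
    let durations = vec![vec![27_000.0, 16_500.0]];
    println!("{:?}", to_triplets(&durations)); // [(0, 0, 27000.0), (0, 1, 16500.0)]
}
```

Each call then contributes N×M rows to the dataset, which is what makes the table endpoint the workhorse here.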
Simple Route (convenience wrapper)
For when you just need duration and distance between two points without building a full request:
```rust
use osrm_binding::Point;

let result = engine.simple_route(
    Point { longitude: 2.3522, latitude: 48.8566 }, // Paris
    Point { longitude: 5.3698, latitude: 43.2965 }, // Marseille
).unwrap();

println!("Duration: {}s, Distance: {}m", result.duration, result.distance);
```
Trip (TSP Solver)
OSRM also ships a trip endpoint that computes an approximate solution to the traveling salesman problem. Useful for delivery route optimization:
```rust
use osrm_binding::{TripRequest, Point};

let request = TripRequest {
    points: vec![
        Point { longitude: 2.3522, latitude: 48.8566 }, // Paris
        Point { longitude: 4.8357, latitude: 45.7640 }, // Lyon
        Point { longitude: 5.3698, latitude: 43.2965 }, // Marseille
    ],
};

let trip = engine.trip(&request).unwrap();
println!("{:?}", trip);
```
Performance: The Reason This Exists
Here are benchmark numbers from cargo bench, computing batches of routes around Paris:
| Scenario | Algorithm | Time |
|---|---|---|
| 10km radius, multiple routes | MLD | ~5.5 ms |
| 100km radius, multiple routes | MLD | ~13.9 ms |
| 10km radius, multiple routes | CH | ~3.9 ms |
| 100km radius, multiple routes | CH | ~6.2 ms |
These are in-process calls: no serialization, no TCP, no JSON parsing. Compare that with a typical localhost HTTP round-trip, which rarely dips below 1–2ms per single request — and that's before you factor in JSON decode time on the response.
For my use case — generating 500M training triplets for a metric learning model over French address pairs — this difference isn't about speed, it's about feasibility. At 1ms per HTTP request, 500M rows would take ~139 hours. In-process, with the table API batching multiple destinations per call, the same dataset becomes tractable in a fraction of that time.
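The back-of-the-envelope arithmetic is worth making explicit. The per-call cost of a batched table request here is a hypothetical ~10ms for a 100×100 call, chosen to be the same order of magnitude as the benchmarks above:

```rust
/// Hours of wall time for `pairs` sequential HTTP requests at `ms_per_request` each.
fn http_hours(pairs: u64, ms_per_request: f64) -> f64 {
    pairs as f64 * ms_per_request / 1000.0 / 3600.0
}

/// Hours of wall time when each in-process table call returns `pairs_per_call` results.
fn table_hours(pairs: u64, pairs_per_call: u64, ms_per_call: f64) -> f64 {
    (pairs / pairs_per_call) as f64 * ms_per_call / 1000.0 / 3600.0
}

fn main() {
    let pairs = 500_000_000;
    // One request per pair at an optimistic 1 ms each.
    println!("HTTP: ~{:.0} h", http_hours(pairs, 1.0));
    // One table call covers 100 x 100 = 10_000 pairs at a hypothetical ~10 ms.
    println!("Batched table: ~{:.1} h", table_hours(pairs, 10_000, 10.0));
}
```

The exact per-call cost depends on point spread and algorithm, but the shape of the result doesn't: batching pairs into matrix calls is what turns days into hours.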
Setup: How the Build Works
Here's something worth understanding: you don't install OSRM. cargo build handles everything.
The build.rs script downloads the OSRM v6.0.0 source tarball directly from GitHub, decompresses it into the Cargo OUT_DIR, and builds it via CMake — all automatically, the first time you build your project. The OSRM C++ libraries end up statically linked into your binary. There's no separate installation step, no system-wide OSRM install required.
What you do need are the system libraries that OSRM itself depends on: Boost, TBB, libfmt, libbz2, and a few others. These are standard packages available in any Ubuntu/Debian repo.
Ubuntu 24.04
```shell
sudo apt update
sudo apt install build-essential git cmake pkg-config \
    libbz2-dev libxml2-dev libzip-dev libboost-all-dev \
    lua5.2 liblua5.2-dev libtbb-dev libfmt-dev
```
Then just cargo build --release. The first build takes a while (it's compiling OSRM from source), but subsequent builds are cached.
Docker
The multi-stage Dockerfile installs build deps, lets cargo build handle the OSRM compilation, then copies the binary into a slim runtime image with only the shared .so files needed at runtime:
```dockerfile
FROM rust:1.88.0-bookworm AS builder
WORKDIR /usr/src/app
COPY Cargo.toml Cargo.lock ./
COPY ./src ./src
RUN apt-get update && apt-get install -y --no-install-recommends \
    cmake g++ gcc git \
    libboost1.81-all-dev libbz2-dev liblua5.4-dev \
    libtbb-dev libxml2-dev libzip-dev lua5.4 \
    make pkg-config libfmt-dev
RUN cargo build --release

FROM debian:bookworm-slim
WORKDIR /usr/src/app
COPY --from=builder /usr/src/app/target/release/my-bin ./
RUN apt-get update && apt-get install -y --no-install-recommends \
    expat libboost-date-time1.81.0 libboost-iostreams1.81.0 \
    libboost-program-options1.81.0 libboost-thread1.81.0 \
    liblua5.4-0 libtbb12 && \
    rm -rf /var/lib/apt/lists/* && ldconfig /usr/local/lib
CMD ["./my-bin"]
```
Replace my-bin with your binary name. The builder needs the full -dev packages (headers + static libs for compilation), while the runtime image only needs the dynamic .so files.
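A quick way to sanity-check the slim image is to list the binary's dynamic dependencies: any "not found" line means a runtime package is still missing. (my-bin is the placeholder name from the Dockerfile; substitute your own.)

```shell
# Run inside the runtime image; prints unresolved shared libraries, if any.
ldd ./my-bin | grep "not found" || echo "all shared libraries resolved"
```

This catches mismatches between the builder's -dev packages and the runtime's .so packages before they surface as a load error at container start.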
Preparing Your OSRM Data File
If you haven't preprocessed an OSRM file before, the process is:
```shell
# Download an extract (example: France from Geofabrik)
wget https://download.geofabrik.de/europe/france-latest.osm.pbf

# Extract with a profile (car, bicycle, foot)
osrm-extract -p /usr/share/osrm/profiles/car.lua france-latest.osm.pbf

# Contract (for CH algorithm) or partition+customize (for MLD)
osrm-contract france-latest.osrm

# OR for MLD:
# osrm-partition france-latest.osrm
# osrm-customize france-latest.osrm
```
The resulting .osrm file (along with its companion files in the same directory) is what you pass to OsrmEngine::new.
When to Use This vs. the HTTP Client
| | osrm-binding | HTTP client |
|---|---|---|
| Latency per query | ~0.1ms (in-process) | ~1–5ms (localhost) |
| Throughput | Very high | Moderate |
| Setup complexity | cargo build fetches & compiles OSRM automatically; system libs (Boost, TBB, fmt) needed | Just needs a running server |
| Deployment | Binary links OSRM statically/dynamically | Separate OSRM process |
| Best for | Batch processing, training data, matrix ops | Web backends, occasional queries |
If you're running OSRM on a separate machine or want to keep routing as an independent service, use the HTTP API. If you're doing heavy batch computation and OSRM can live on the same machine as your process, the binding pays off immediately.
What's Next
The crate is at v0.1.7 and covers the main OSRM endpoints I needed: route, table, trip, and simple_route. OSRM also exposes nearest and match (map matching) endpoints which aren't bound yet — contributions welcome.
The repo is at github.com/mathias-vandaele/osrm-binding. It's MIT licensed, has integration tests (set OSRM_TEST_DATA_PATH to your .osrm file), and benchmarks you can run with cargo bench.
If you're building anything in the routing/geospatial space in Rust and hitting performance ceilings with HTTP, give it a try.



