
Rust powered Backend 🦀

🧪 An experimental backend written in Rust is available, providing multiprocess support for both sync & async applications.

This removes the need for the redis library dependency & allows the library to offer the same interface to both synchronous & asyncio based applications.

Under the hood a dedicated thread handles value changes & pipelined requests. This makes operations like inc() on a metric extremely fast in your application, since they are always handled asynchronously & in parallel.



pip install pytheus-backend-rs

then just load it like you would any other backend:

from pytheus.backends import load_backend
from pytheus_backend_rs import RedisBackend

load_backend(
    backend_class=RedisBackend,
    backend_config={"host": "", "port": 6379},  # set "host" to your Redis host
)
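Once the backend is loaded, metrics are used as usual & every write is queued to the background thread. A minimal sketch, assuming the standard pytheus Counter API (the metric name here is just a placeholder):

from pytheus.metrics import Counter

requests_counter = Counter(name="requests_total", description="Total requests handled")

# inc() returns immediately; the write is batched & sent to Redis
# by the background pipeline thread.
requests_counter.inc()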


In async applications, generate_metrics should be called in a ThreadPool. A framework like FastAPI does this automatically if your endpoint is defined with def (rather than async def).

@app.get("/metrics")
def pytheus_metrics():
    return generate_metrics()

The GIL gets released while waiting for the result so that other operations can run in parallel.


Logs are emitted on thread initialization & on errors if they happen (e.g. failure to connect to Redis).

To see them, just configure the Python logging library, for example:

import logging

logging.basicConfig(level=logging.INFO)


Logging needs to be set up before loading the backend with the load_backend function.

INFO:pytheus_backend_rs:Starting pipeline thread....0
INFO:pytheus_backend_rs:Starting pipeline thread....1
INFO:pytheus_backend_rs:Starting pipeline thread....2
INFO:pytheus_backend_rs:Starting pipeline thread....3
INFO:pytheus_backend_rs:Starting BackendAction thread....
INFO:pytheus_backend_rs:RedisBackend initialized


The Rust powered RedisBackend is safe to use in async applications!

The only caveat is that, for now, it is preferable to call the generate_metrics function inside a ThreadPool in such applications.
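If an endpoint has to be async def, you can offload the call yourself. A minimal sketch using asyncio's default executor, assuming generate_metrics is importable from pytheus.exposition:

import asyncio

from pytheus.exposition import generate_metrics


async def pytheus_metrics():
    # run the blocking call in the default ThreadPoolExecutor;
    # the GIL is released while waiting for the result
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(None, generate_metrics)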

Performance vs Python ⚡️

The Rust backend library also includes single process implementations: one equivalent to the Python implementation, using Mutexes, and one using Atomics.


  • SingleProcessBackend for the Mutex version.
  • SingleProcessAtomicBackend for the Atomic version.
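
They are loaded the same way as any other backend. A minimal sketch, assuming the same load_backend signature shown above (single process backends need no backend_config):

from pytheus.backends import load_backend
from pytheus_backend_rs import SingleProcessAtomicBackend

load_backend(backend_class=SingleProcessAtomicBackend)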

A simple benchmark incrementing a single Counter for 1 billion iterations shows that there are performance improvements coming just from the same implementation being written in Rust.

Between the Mutex and the Atomic approach the performance gap is small, with the Atomic approach being slightly faster.
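
For reference, a sketch of the kind of benchmark loop measured; the metric name & timing code are illustrative, not the exact benchmark used:

import time

from pytheus.metrics import Counter

counter = Counter(name="bench_counter", description="benchmark counter")

start = time.perf_counter()
for _ in range(1_000_000_000):
    counter.inc()
print(f"Time taken: {time.perf_counter() - start:.0f} seconds")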

1_000_000_000 iterations

Implementation    Time taken (seconds)
Python Mutex 🐢   ~325
Rust Mutex 🐇     ~223
Rust Atomic 🐇    ~215